
WO2016163183A1 - Head-mounted display system and computer program for presenting real space surrounding environment of user in immersive virtual space - Google Patents

Head-mounted display system and computer program for presenting real space surrounding environment of user in immersive virtual space Download PDF

Info

Publication number
WO2016163183A1
Authority
WO
WIPO (PCT)
Prior art keywords
virtual space
hmd
surrounding environment
image
real
Prior art date
Application number
PCT/JP2016/056686
Other languages
French (fr)
Japanese (ja)
Inventor
栗原秀行
Original Assignee
株式会社コロプラ
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社コロプラ
Publication of WO2016163183A1

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/20 Input arrangements for video game devices
    • A63F13/21 Input arrangements for video game devices characterised by their sensors, purposes or types
    • A63F13/213 Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/25 Output arrangements for video game devices
    • A63F13/26 Output arrangements for video game devices having at least one additional display device, e.g. on the game controller or outside a game booth
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/50 Controlling the output signals based on the game progress
    • A63F13/53 Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63F CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F13/00 Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F13/60 Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
    • A63F13/65 Generating or modifying game content before or while executing the game program automatically by game devices or servers from real world data, e.g. measurement in live racing competition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00 Manipulating 3D models or images for computer graphics

Definitions

  • The present invention relates to a head-mounted display system comprising a computer that, in a situation where a user wears a head-mounted display (HMD) on the head and is immersed in a three-dimensional virtual space, is controlled to display information on the surrounding environment of the real space on the HMD in association with the three-dimensional virtual space, and to a computer program for causing the computer to function as such.
  • An HMD is known that is mounted on the user's head and can present an immersive three-dimensional virtual space image to the user on a display or the like disposed in front of the eyes.
  • In particular, with an HMD such as that disclosed in Patent Document 1, a 360-degree panoramic image can be displayed in a three-dimensional virtual space.
  • Such an HMD typically includes various sensors (for example, an acceleration sensor and an angular velocity sensor) and measures posture data of the HMD main body. In particular, it makes it possible to change the viewing direction of the panoramic image according to information on the rotation angle of the head: when the user wearing the HMD rotates his or her head, the viewing direction of the 360-degree panoramic image changes accordingly.
  • Such an HMD heightens the user's sense of immersion in the video world and improves its entertainment value.
  • However, as the user's sense of immersion in the three-dimensional virtual space grows, particularly when wearing a non-transmissive HMD that completely covers the field of view, the user becomes unable to grasp the environment around him or her in the real space. For example, even if another person comes within close range of the user wearing the HMD, the user often does not notice that person's presence.
  • Each screen shown in FIG. 1 is an example of a three-dimensional virtual space in which the user is immersed.
  • FIG. 1A shows a screen used in a virtual multi-display application in which a plurality of virtual televisions are arranged in a predetermined visual field region of the three-dimensional virtual space.
  • FIG. 1B shows a screen used in an action-type RPG game application in which the user's character moves around a plane in the three-dimensional virtual space and battles enemy characters.
  • An object of the present invention is to inform the user, while the user wears the HMD and is immersed in the three-dimensional virtual space, of the surrounding environment in the real space by presenting information on that environment within the three-dimensional virtual space.
  • To solve the above problem, a head-mounted display (HMD) system according to the present invention comprises: an HMD worn by the user that displays a virtual space image generated based on virtual space information and immerses the user in the virtual space; a real-time camera that generates a peripheral image of the user's real space; and a computer connected to the HMD and the real-time camera, the computer being configured to acquire the peripheral image from the real-time camera, detect the surrounding environment of the real space using the acquired peripheral image, and output information on the surrounding environment to the HMD in association with the virtual space information.
  • In the HMD system, detecting the surrounding environment using the peripheral image includes detection by human face recognition on the peripheral image, and outputting the information on the surrounding environment includes notifying the number of people in the surroundings. Outputting the information further includes displaying the real-space video from the real-time camera on a virtual display provided in the virtual space, so that it appears on the HMD as part of the virtual space image, and superimposing a character image on the virtual space image so that a character corresponding to each detected person is placed in the virtual space.
  • The present invention also provides a computer program for a computer to which a real-time camera and a head-mounted display are connected, causing the computer to function as: a peripheral image acquisition unit that acquires a peripheral image of the real space from the real-time camera; an environment detection unit that detects the surrounding environment of the real space using the acquired peripheral image; and an output unit that outputs information on the surrounding environment to the HMD.
  • According to the present invention, by presenting information on the real-space surroundings within the three-dimensional virtual space, a real-time camera photographing those surroundings can serve as a surrounding-environment detection sensor, in particular a human detection sensor, and a user immersed in the three-dimensional virtual space can notice a change in his or her surroundings without removing the HMD.
  • FIG. 1 is an example of a display screen implemented according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram illustrating an HMD system according to an embodiment of the present invention.
  • FIG. 3 shows an exemplary orthogonal coordinate system in a real space defined around the head of the user wearing the HMD shown in FIG.
  • FIG. 4 is a schematic diagram showing a plurality of detection points virtually provided on the HMD, which are detected by the position tracking camera.
  • FIG. 5 is a schematic diagram for presenting a user's surrounding environment in real space in an immersive virtual space according to an embodiment of the present invention.
  • FIG. 6 is a functional block diagram of the control circuit unit in the HMD system shown in FIG. 2.
  • FIG. 7 is an exemplary process flow diagram for detecting the surrounding environment of a user according to an embodiment of the present invention.
  • FIG. 8 is an exemplary process flow diagram for detecting the surrounding environment of a user according to an embodiment of the present invention.
  • FIG. 9 is a screen example of the first embodiment relating to the presentation of the user's surrounding environment in the real space with respect to the immersive virtual space.
  • FIG. 10 is a screen example of the second embodiment regarding the presentation of the user's surrounding environment in the real space with respect to the immersive virtual space.
  • FIG. 11 is a screen example of the second embodiment relating to presentation of the user's surrounding environment in the real space with respect to the immersive virtual space.
  • FIG. 12 is a screen example of the second embodiment regarding the presentation of the user's surrounding environment in the real space with respect to the immersive virtual space.
  • FIG. 2 is an overall schematic diagram of an HMD system 100 using a head-mounted display (hereinafter referred to as “HMD”) in order to execute a computer program according to an embodiment of the present invention.
  • the HMD system 100 includes an HMD main body 110, a computer (control circuit unit) 120, a position tracking camera 130, and a real-time camera 140.
  • the HMD 110 includes a display 112 and a sensor 114.
  • The display 112 is a non-transmissive display device configured to completely cover the user's field of view, so the user observes only the screen displayed on the display 112. Because the user wearing the non-transmissive HMD 110 loses all view of the outside world, the display mode completely immerses the user in the three-dimensional virtual space displayed on the display 112 by the application executed in the control circuit unit 120.
  • the sensor 114 included in the HMD 110 is fixed near the display 112.
  • The sensor 114 includes a geomagnetic sensor, an acceleration sensor, and/or a tilt (angular velocity, gyro) sensor, and through one or more of these can detect various movements of the HMD 110 (display 112) mounted on the user's head.
  • In the case of an angular velocity sensor in particular, as shown in FIG. 4, the angles of the HMD 110 about three axes are detected over time as the HMD 110 moves, and the change over time of the angle (tilt) about each axis can be determined.
  • XYZ coordinates are defined around the head of the user wearing the HMD: the vertical direction in which the user stands upright is the Y axis, the direction orthogonal to the Y axis and connecting the center of the display 112 with the user is the Z axis, and the axis orthogonal to both the Y axis and the Z axis is the X axis.
  • The sensor detects the angle about each axis, specifically the yaw angle indicating rotation about the Y axis, the pitch angle indicating rotation about the X axis, and the roll angle indicating rotation about the Z axis, and the motion detection unit 210 determines angle (tilt) information data as visual field information based on their change over time.
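The yaw/pitch decomposition above can be sketched as a gaze-direction conversion. This is a generic formulation under the axis conventions just described, not code from the patent; the function name and the radian convention are assumptions.

```python
import math

def gaze_direction(yaw, pitch):
    # Yaw rotates about the vertical Y axis, pitch about the lateral
    # X axis; both in radians. Roll (rotation about Z) spins the view
    # around the gaze axis and does not change the gaze direction,
    # so it is omitted from this vector.
    x = math.sin(yaw) * math.cos(pitch)
    y = math.sin(pitch)
    z = math.cos(yaw) * math.cos(pitch)
    return (x, y, z)
```

With zero yaw and pitch the gaze points along +Z, i.e. straight at the display, matching the coordinate definition above.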
  • The computer (control circuit unit) 120 included in the HMD system 100 is connected to the position tracking camera 130 and the real-time camera 140, and functions as a control circuit device for immersing the user wearing the HMD in the three-dimensional virtual space and performing operations based on that three-dimensional virtual space.
  • the control circuit unit 120 may be configured as hardware different from the HMD 110.
  • The hardware can be a computer such as a personal computer, or a server computer reachable via a network. That is, although not shown, any computer including a CPU, main memory, auxiliary memory, a transmission/reception unit, a display unit, and an input unit connected to one another by a bus can be used.
  • the control circuit unit 120 may be mounted inside the HMD 110 as a visual field adjustment device. In this case, the control circuit unit 120 can implement all or part of the functions of the visual field adjustment device. When only a part is implemented, the remaining functions may be implemented on the HMD 110 side or on a server computer (not shown) side via a network.
  • the position tracking camera 130 provided in the HMD system 100 is communicably connected to the control circuit unit 120 and has a position tracking function of the HMD 110.
  • the position tracking camera 130 is realized using an infrared sensor and / or a plurality of optical cameras.
  • Because the HMD system 100 includes the position tracking camera 130 and detects the position of the HMD on the user's head, it can accurately associate and identify the position of the HMD in the real space and the virtual-space position of the virtual camera (immersed user) in the three-dimensional virtual space.
  • More specifically, the position tracking camera 130 detects over time, via infrared and in correspondence with the user's movement, the real-space positions of a plurality of detection points virtually provided on the HMD 110. Based on the change over time of the real-space positions detected by the position tracking camera 130, the position of the HMD in the real space and the virtual-space position of the virtual camera (immersed user) in the three-dimensional virtual space can be accurately associated and identified according to the movement of the HMD 110.
  • the position tracking camera 130 is an optional component. When the position tracking camera 130 is not used, the user is always arranged at the center (origin) in the three-dimensional virtual space.
  • The real-time camera 140 provided in the HMD system 100 photographs the user's real-space surroundings and generates a real-time peripheral image, which is stored in real time or at regular intervals.
  • the real-time camera may be a stereo camera that can record information in the depth direction by simultaneously photographing the surrounding environment from a plurality of different directions.
  • The real-time camera 140 has an interface such as USB or IEEE 1394 and can transfer real-time images to the computer 120 to which it is connected.
  • In particular, a network camera, especially a web camera, that has a network interface and is locally/globally accessible via wired/wireless communication is preferable.
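The periodic image capture described above can be sketched as a simple polling loop. The camera class below is a stand-in so the sketch is self-contained; a real implementation might wrap an OpenCV `cv2.VideoCapture` opened on a USB device index or on a web camera's stream URL.

```python
import time

class FakeCamera:
    """Stand-in for the real-time camera 140; a real implementation
    might wrap cv2.VideoCapture(0) for a USB camera or a URL for a
    network (web) camera. read() mimics OpenCV's (success, frame)
    return convention."""
    def __init__(self):
        self._n = 0

    def read(self):
        self._n += 1
        return True, f"frame-{self._n}"

def acquire_frames(camera, count, interval_s=0.0):
    # Poll the camera at a regular interval and collect the captured
    # frames, as the peripheral image acquisition unit does.
    frames = []
    for _ in range(count):
        ok, frame = camera.read()
        if ok:
            frames.append(frame)
        time.sleep(interval_s)
    return frames
```

The `interval_s` parameter models the "at regular intervals" acquisition; it is an illustrative knob, not a value from the patent.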
  • FIG. 5 is a schematic diagram for displaying information related to the surrounding environment of the real space on the HMD in association with the three-dimensional virtual space in order to present the surrounding environment of the user in the real space in the immersive virtual space.
  • The real-time camera 140 is installed on top of the display of the computer 120, and the user 1, wearing the HMD 110, sits facing it.
  • the three-dimensional image displayed on the user's HMD is simultaneously displayed on the display of the computer 120 as a two-dimensional image.
  • Two other people (2, 3) are close to the user 1, looking into the display of the computer 120.
  • Since the HMD user 1 is immersed in the three-dimensional virtual space, he or she cannot observe the state of the real space and has not yet noticed the presence of the people 2 and 3.
  • Here, the HMD system notifies the HMD user 1 of the presence of the people 2 and 3 by displaying information on them on the HMD in association with the three-dimensional virtual space. More specifically, first, (i) the computer acquires an image from the real-time camera 140 in which the people 2 and 3 appear, and detects them in the image using a face recognition program. Next, (ii) according to the number of detected people, for example by displaying characters in the three-dimensional virtual space, the HMD user 1 is notified that two people are present.
  • For the face recognition/detection function of the face recognition program in (i), any function known to those skilled in the art may be used, and its description is omitted here. Note that even when face recognition is used, the HMD user 1 is not detected as a human face, because the upper half of his or her face is hidden by the HMD.
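As a concrete sketch of step (i), the face detection can be wrapped behind a small interface. The detector callable and the stub below are illustrative assumptions; a real system would plug in a trained classifier, e.g. an OpenCV Haar cascade via `detectMultiScale`.

```python
def detect_faces(image, detector):
    # `detector` is any callable mapping an image to a list of
    # (x, y, w, h) face bounding boxes. With OpenCV one could pass
    # lambda img: classifier.detectMultiScale(img) for a trained
    # cv2.CascadeClassifier. Because the HMD hides the upper half of
    # the wearer's face, frontal-face detectors normally do not
    # report the wearer, so the boxes reflect only other people.
    return list(detector(image))

# Stub detector standing in for a trained classifier: reports two
# hypothetical faces regardless of input.
def stub_detector(image):
    return [(10, 10, 40, 40), (80, 20, 38, 38)]

num_people = len(detect_faces("peripheral-image", stub_detector))
```

Counting the returned boxes gives the number of people in the surroundings, which is the quantity the system reports in step (ii).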
  • FIG. 6 shows a configuration of main functions of components related to the control circuit unit 120 of FIG. 2 in order to implement information processing for presenting the surrounding environment of the user in the real space in the immersive virtual space according to the embodiment of the present invention.
  • The control circuit unit 120 receives input from the sensors 114/130, processes that input together with the peripheral image acquired from the real-time camera 140, and outputs the result to the HMD (display) 112.
  • The control circuit unit 120 mainly includes a motion detection unit 210, a visual field determination unit 220, a visual field image generation unit 230, a peripheral image acquisition unit 250, an environment detection unit 260, and a spatial image superimposition unit 270, and is configured to process various information by interacting with various tables, such as the spatial information storage unit 280 that stores the virtual space information.
  • the motion detection unit 210 determines various types of motion data of the HMD 110 worn on the user's head based on the input of motion information measured by the sensor 114/130.
  • The motion data consist of tilt (angle) information detected over time by the tilt sensor (gyro sensor) 114 included in the HMD and position information detected over time by the position tracking camera 130.
  • The use of the position tracking camera 130 is optional; when it is not used, the user is always placed at the center (origin) of the three-dimensional virtual space.
  • The visual field determination unit 220 determines the position and direction of the virtual camera arranged in the three-dimensional virtual space, and the field of view from that virtual camera, based on the three-dimensional virtual space information stored in the spatial information storage unit 280 and the tilt information detected by the motion detection unit 210. Then, to display the three-dimensional virtual space image on the HMD, the visual field image generation unit 230 can use the virtual space information to generate a partial visual field image of the 360-degree panorama for the visual field region determined by the visual field determination unit 220. The visual field image can be displayed on the HMD like a three-dimensional image by generating two two-dimensional images, one for the left eye and one for the right eye, and combining the two in the HMD.
  • The peripheral image acquisition unit 250 captures the peripheral images continuously photographed by the real-time camera 140 into the storage unit. The environment detection unit 260 then uses each peripheral image acquired by the peripheral image acquisition unit 250 to detect the surrounding environment in the real space, in particular "changes" in the surrounding environment. More specifically, the presence of human faces, and in particular an increase or decrease in the number of people, can be detected from a peripheral image using a face recognition program. Note that the face recognition program may be executed as a function implemented on the real-time camera 140 side rather than on the control circuit unit 120 side.
  • The spatial image superimposition unit (output unit) 270 outputs the information on the surrounding environment detected by the environment detection unit 260 to the HMD (display) 112 in association with the virtual space information stored in the spatial information storage unit 280.
  • For example, the number of detected people, that is, the number of people in the surrounding environment, is announced in the three-dimensional virtual space, and/or each detected person is displayed as a three-dimensional character in the three-dimensional virtual space.
  • the output of information related to the surrounding environment to the HMD is not limited to these.
  • When a real-time camera 140 that can acquire depth information, such as a stereo camera, is used, the position information of the detected person can also be acquired.
  • In that case, the positional relationship between the HMD user and the person can be included in the information on the surrounding environment presented in the three-dimensional virtual space.
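Preserving the positional relationship can be sketched as a mapping from the person's real-space offset to a character position in the virtual space. The uniform scale factor and the ground-plane placement are assumptions for illustration; the patent only states that the positional relationship can be included.

```python
def place_character(person_offset_real, scale=1.0):
    # person_offset_real is the detected person's (x, z) offset from
    # the HMD user in real space (lateral, depth, in meters), e.g.
    # estimated from a stereo camera's depth information. The
    # character is placed at the corresponding offset from the
    # immersed user in the virtual space, preserving the real-space
    # positional relationship.
    x, z = person_offset_real
    return (x * scale, 0.0, z * scale)  # y = 0: character stands on the ground plane
```

A person standing 2 m behind and 1.5 m to the right of the user would thus appear as a character at the same relative offset in the virtual space.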
  • FIG. 7 shows a processing flow in which the user changes the field of view in the immersive three-dimensional virtual space together with the user's HMD tilting operation as shown in FIGS. 1 (a) and 1 (b).
  • FIG. 8 shows a processing flow in which information related to the detected surrounding environment of the real space is displayed on the HMD in association with the three-dimensional virtual space.
  • In step S10-1, the HMD 110 constantly detects the movement of the user's head using the sensors 114/130.
  • The control circuit unit 120 then determines the tilt information and position information of the HMD 110 with the motion detection unit 210.
  • The motion detection unit 210 determines the position of the virtual camera arranged in the three-dimensional virtual space based on the position information, and determines the direction of the virtual camera based on the tilt information.
  • Note that the position tracking camera 130 is not an essential component; if it is not provided, step S20-1 is omitted and the position information need not be determined.
  • In step S20-3, the visual field determination unit 220 determines the visual field region seen from the virtual camera in the three-dimensional virtual space, based on the position and direction of the virtual camera and its predetermined viewing angle.
  • In step S20-4, the visual field image generation unit 230 generates a visual field image of the determined visual field region for display on the HMD 112.
  • In step S10-2, the visual field image generated in step S20-4 is displayed on the display 112 of the HMD.
  • Steps S10-1, S20-1 to S20-4, and then S10-2 form the basic processing routine, and these steps are repeated while the application runs.
  • Through this routine, a user immersed in the three-dimensional virtual space can obtain views of the three-dimensional virtual space from various positions and directions by tilting his or her head.
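One pass of the basic routine can be sketched with the hardware behind simple callables, so the control flow is self-contained. The function names and the dictionary representation of a view are illustrative assumptions, not the patent's implementation.

```python
def determine_view(position, tilt):
    # Stand-in for the visual field determination unit (S20-3).
    return {"position": position, "tilt": tilt}

def run_frame(read_tilt, read_position, render):
    # S10-1/S20-1: sense head motion (tilt, and position when a
    # position tracking camera is present); S20-3: determine the
    # visual field; S20-4: generate the visual field image; S10-2:
    # return it for display on the HMD. read_position() returns None
    # when no position tracking camera is used, in which case the
    # user stays at the origin of the virtual space.
    tilt = read_tilt()
    position = read_position() or (0.0, 0.0, 0.0)
    view = determine_view(position, tilt)
    return render(view)

frame = run_frame(
    read_tilt=lambda: (0.0, 0.0, 0.0),  # yaw, pitch, roll
    read_position=lambda: None,         # no tracking camera
    render=lambda view: ("image-for", view),
)
```

In an application this pass would repeat continuously, matching the repeated execution of steps S10-1 through S10-2 described above.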
  • the peripheral image acquisition unit 250 of the control circuit unit 120 regularly acquires a peripheral image from the real-time camera 140 in order to detect a person in the peripheral environment.
  • FIG. 8 shows a processing flow for displaying the information related to the detected surrounding environment of the real space on the HMD in association with the three-dimensional virtual space.
  • In step S30-1, peripheral images are continuously generated through continuous shooting by the real-time camera 140.
  • A peripheral image is acquired by the peripheral image acquisition unit 250 of the control circuit unit 120, and the environment detection unit 260 detects the surrounding environment of the real space using the acquired peripheral image; in particular, it detects changes in the surrounding environment.
  • Specifically, face recognition on the image is used to detect the people present in the vicinity and their number. When the number of people increases or decreases, it can be determined that the surrounding environment has changed, and it is advantageous to notify the user of the change.
  • In step S20-8, the spatial image superimposition unit 270 generates information based on the surrounding environment and/or a corresponding image, and associates the information and/or image with the virtual space information stored in the spatial information storage unit 280.
  • In step S10-3, the information is displayed on the HMD 110. The information based on the surrounding environment includes, but is not limited to, the number of people and character information.
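The change-detection-to-notification step can be sketched as a comparison of face counts between successive peripheral images. The wording of the first message follows the screen example described for FIG. 9 ("There is one viewer."); the plural form is an assumption.

```python
def environment_change_message(prev_count, curr_count):
    # Compare the number of faces found in the previous and current
    # peripheral images, as the environment detection unit does, and
    # build the notification to superimpose on the virtual space
    # image when the number of people has changed.
    if curr_count == prev_count:
        return None  # no change in the surroundings: nothing to show
    if curr_count == 1:
        return "There is one viewer."
    return f"There are {curr_count} viewers."
```

Returning None when the count is unchanged reflects that only a change in the surrounding environment triggers a notification.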
  • FIG. 9 shows the first embodiment
  • FIGS. 10 to 12 show the second embodiment.
  • FIG. 9 shows a screen displayed by the virtual multi-display application of FIG. 1(a), in the case where one person is identified by face recognition.
  • As shown, a message "There is one viewer." is displayed in the three-dimensional virtual space (here, at the upper center of the screen).
  • In addition, the real-space video from the real-time camera is shown as-is on the central virtual main display and thus appears on the HMD as part of the virtual space image. That is, the situation in which one person is present behind the HMD user and that person's face is photographed and detected by the real-time camera is reflected as-is in the three-dimensional virtual space displayed on the HMD.
  • In this way, a user immersed in the three-dimensional virtual space can notice a change in the surrounding environment, such as the presence of a person nearby, without removing the HMD.
  • That is, a real-time camera can be applied as a human detection sensor.
  • In the second embodiment, a character image is superimposed on the virtual space image so that a character appears in the view displayed on the HMD.
  • Here, the "bear" (Kuma) character corresponding to the detected person is preferably displayed on the HMD so that it enters the view while moving from left to right (arrow).
  • FIGS. 11 and 12 are examples in which two "bear" characters that have entered the view in this way are displayed on the HMD; the HMD user can thus notice a change in the surrounding environment, namely the presence of two people nearby, while still wearing the HMD.
  • FIG. 11 shows a front view of the "bear" character, and FIG. 12 shows a rear view.
  • 100 HMD system
    110 HMD
    112 Display
    114 Tilt sensor (gyro sensor)
    120 Computer (control circuit unit)
    130 Position tracking camera
    140 Real-time camera
    210 Motion detection unit
    220 Visual field determination unit
    230 Visual field image generation unit
    250 Peripheral image acquisition unit
    260 Environment detection unit
    270 Spatial image superimposition (output) unit
    280 Spatial information storage unit

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Computer Hardware Design (AREA)
  • Computer Graphics (AREA)
  • Optics & Photonics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The purpose of the present invention is, when a user is wearing a head-mounted display (HMD) and is immersed in a three-dimensional virtual space, to present, within the three-dimensional virtual space, information of a surrounding environment in real space, such that the user may sense a surrounding real space environment. This HMD system comprises: an HMD which a user wears, which displays a virtual space image which is generated on the basis of virtual space information, and which immerses the user in a virtual space; a realtime camera which generates an image of the user's real space surroundings; and a computer which is connected to the HMD and the realtime camera, said computer being configured to acquire the image of the surroundings from the realtime camera, use the acquired image of the surroundings to sense the surrounding real space environment, and output information relating to the surrounding environment to the HMD in association with the virtual space information.

Description

没入型仮想空間に実空間のユーザの周辺環境を提示するためのヘッドマウント・ディスプレイ・システムおよびコンピュータ・プログラムHead-mounted display system and computer program for presenting a user's surrounding environment in an immersive virtual space
 本発明は、ユーザがヘッドマウント・ディスプレイ(HMD)を頭部に装着し、3次元仮想空間に没入している状況において、実空間の周辺環境に関する情報を3次元仮想空間に関連付けてHMDに表示するようコンピュータに制御させるコンピュータを備えるヘッドマウント・ディスプレイ・システム、および当該コンピュータを機能させるためのコンピュータ・プログラムに関するものである。 According to the present invention, in a situation where a user wears a head-mounted display (HMD) on the head and is immersed in the three-dimensional virtual space, information related to the surrounding environment of the real space is displayed on the HMD in association with the three-dimensional virtual space. The present invention relates to a head mounted display system including a computer to be controlled by the computer, and a computer program for causing the computer to function.
 ユーザの頭部に装着され、眼前に配置されたディスプレイ等によってユーザに没入型の3次元仮想空間画像を提示可能なHMDが知られている。特に、特許文献1に開示されるようなHMDでは、3次元仮想空間に360度のパノラマ画像を表示可能である。このようなHMDは通例、各種センサ(例えば、加 度センサや角 度センサ)を備え、HMD本体の姿勢データを計測する。特に、頭部の回転角に関する情報に従ってパノラマ画像の視線方向変更を可能にする。即ち、HMDを装着したユーザが自身の頭部を回転させると、それに併せて、360度のパノラマ画像の視線方向を変更するのを可能にする。このようなHMDは、ユーザに対し、映像世界への没入感を高め、エンタテインメント性を向上させる。 There is known an HMD that is mounted on the user's head and can present an immersive three-dimensional virtual space image to the user by a display or the like disposed in front of the eyes. In particular, with an HMD as disclosed in Patent Document 1, a 360-degree panoramic image can be displayed in a three-dimensional virtual space. Such an HMD typically includes various sensors (for example, an angle sensor or an angle sensor) and measures posture data of the HMD main body. In particular, it is possible to change the line-of-sight direction of the panoramic image according to information on the rotation angle of the head. That is, when the user wearing the HMD rotates his / her head, it is possible to change the viewing direction of the panoramic image of 360 degrees. Such an HMD increases a sense of immersion in the video world and improves entertainment properties for the user.
特開2013-258614号公報JP 2013-258614 A
 しかしながら、特にユーザの視界を完全に覆う非透過型のHMDをユーザが装着することにより、3次元仮想空間への没入感が高まるほど、そのユーザは、実空間における自身の周りの環境が把握できなくなる。例えば、HMDを装着したユーザの至近距離に別の人がいても、ユーザはなかなかその存在に気づかないことも多い。 However, the user can understand the environment around him / herself in real space as the user feels more immersed in the three-dimensional virtual space by wearing a non-transparent HMD that completely covers the user's field of view. Disappear. For example, even if there is another person close to the user wearing the HMD, the user often does not notice the presence.
 ユーザが没入する3次元仮想空間の例として、図1に示す各画面例を想定する。例えば、図1(a)は、3次元仮想空間内の所定の視界領域に複数の仮想テレビが配置されるような仮想マルチディスプレイ・アプリケーションで用いられる画面である。また、図1(b)は、3次元仮想空間平面をユーザのキャラクタが動き回って敵キャラクタと対戦するようなアクション型RPGのゲーム・アプリケーションで用いられる画面である。 Suppose each screen example shown in FIG. 1 is an example of a three-dimensional virtual space in which the user is immersed. For example, FIG. 1A shows a screen used in a virtual multi-display application in which a plurality of virtual televisions are arranged in a predetermined view area in a three-dimensional virtual space. FIG. 1B is a screen used in an action-type RPG game application in which a user character moves around in a three-dimensional virtual space plane and battles an enemy character.
 本発明は、ユーザがHMDを装着して3次元仮想空間に没入している状態において、実空間での周辺環境を検知できるよう、周辺環境の情報を3次元仮想空間内に提示することにより、ユーザに知らせることを目的とする。 An object of the present invention is to inform the user of the surrounding environment by presenting information on the surrounding environment within the three-dimensional virtual space, so that the user can detect the real-space surroundings while wearing the HMD and immersed in the three-dimensional virtual space.
 上記の課題を解決するために、本発明によるヘッドマウント・ディスプレイ(HMD)システムは、仮想空間情報に基づいて生成される仮想空間画像を表示して、ユーザを仮想空間に没入させる、ユーザが装着するHMDと、ユーザの実空間の周辺画像を生成するリアルタイム・カメラと、HMDおよびリアルタイム・カメラに接続されるコンピュータであって、リアルタイム・カメラから周辺画像を取得し、該取得した周辺画像を用いて実空間の周辺環境を検知し、周辺環境に関する情報を仮想空間情報に関連付けて前記HMDに出力するように構成されるコンピュータと、を備える。 In order to solve the above problem, a head-mounted display (HMD) system according to the present invention comprises: an HMD worn by the user, which displays a virtual space image generated based on virtual space information and immerses the user in the virtual space; a real-time camera that generates a peripheral image of the user's real space; and a computer connected to the HMD and the real-time camera, the computer being configured to acquire the peripheral image from the real-time camera, detect the surrounding environment of the real space using the acquired peripheral image, and output information on the surrounding environment to the HMD in association with the virtual space information.
 また、当該HMDシステムにおいて、周辺画像を用いた上記周辺環境の検知が、周辺画像に対する人の顔認識による検知を含み、上記周辺環境に関する情報の出力が、周辺環境における人の人数の通知を含む。更に、周辺環境に関する情報の出力が、リアルタイム・カメラからの実空間の映像を、仮想空間に設けた仮想ディスプレイに表示させることにより、仮想空間画像の一部としてHMDに表示することを含む。加えて、周辺環境に関する情報の出力が、人に対応するキャラクタを仮想空間に配置するように、キャラクタ画像を仮想空間画像に重畳することを含む。 In the HMD system, the detection of the surrounding environment using the peripheral image includes detection by human face recognition performed on the peripheral image, and the output of information on the surrounding environment includes notification of the number of people in the surrounding environment. Further, the output of information on the surrounding environment includes displaying real-space video from the real-time camera on a virtual display provided in the virtual space, so that it appears on the HMD as part of the virtual space image. In addition, the output of information on the surrounding environment includes superimposing a character image on the virtual space image so that a character corresponding to a person is placed in the virtual space.
 本発明による、没入型の仮想空間に実空間のユーザの周辺環境を提示するコンピュータ・プログラムでは、コンピュータにリアルタイム・カメラおよびヘッドマウント・ディスプレイ(HMD)が接続されており、リアルタイム・カメラからユーザの実空間の周辺画像を取得する周辺画像取得部と、取得した周辺画像を用いて実空間の周辺環境を検知する環境検知部と、検知した周辺環境に関する情報を仮想空間情報に関連付けてHMDに出力する出力部と、として上記コンピュータに機能させる。 A computer program according to the present invention, for presenting a user's real-space surrounding environment in an immersive virtual space, causes a computer to which a real-time camera and a head-mounted display (HMD) are connected to function as: a peripheral image acquisition unit that acquires a peripheral image of the user's real space from the real-time camera; an environment detection unit that detects the surrounding environment of the real space using the acquired peripheral image; and an output unit that outputs information on the detected surrounding environment to the HMD in association with virtual space information.
 本発明によれば、実空間における周辺環境の情報を3次元仮想空間内に提示することにより、実空間の周辺環境を撮影するためのリアルタイム・カメラを周辺環境検知センサ、特に人検知センサとして適用可能とし、3次元仮想空間に没入した状態にあるユーザは、HMDを脱着することなく、装着したままで自身の周辺環境の変化を検知可能となる。 According to the present invention, by presenting information on the real-space surrounding environment within the three-dimensional virtual space, a real-time camera that photographs the real-space surroundings can be applied as a surrounding-environment detection sensor, in particular as a human detection sensor, and a user immersed in the three-dimensional virtual space can detect changes in his or her surroundings while keeping the HMD on, without removing it.
 図1は、本発明の実施形態により実装される表示画面例である。FIG. 1 is an example of a display screen implemented according to an embodiment of the present invention. 図2は、本発明の実施形態によるHMDシステムを示した模式図である。FIG. 2 is a schematic diagram illustrating an HMD system according to an embodiment of the present invention. 図3は、図2に示したHMDを装着したユーザの頭部を中心にして規定される実空間における例示の直交座標系を示す。FIG. 3 shows an exemplary orthogonal coordinate system in a real space defined around the head of the user wearing the HMD shown in FIG. 2. 図4は、ポジション・トラッキング・カメラによって検知される、HMD上に仮想的に設けられた複数の検知点を示す概要図である。FIG. 4 is a schematic diagram showing a plurality of detection points virtually provided on the HMD, which are detected by the position tracking camera. 図5は、本発明の実施形態により、没入型仮想空間に実空間のユーザの周辺環境を提示するための模式図である。FIG. 5 is a schematic diagram for presenting a user's surrounding environment in real space in an immersive virtual space according to an embodiment of the present invention. 図6は、図2に示したHMDシステムにおける制御回路部に関する機能ブロック図である。FIG. 6 is a functional block diagram relating to the control circuit unit in the HMD system shown in FIG. 2. 図7は、本発明の実施形態により、ユーザの周辺環境検知を実施するための例示の処理フロー図である。FIG. 7 is an exemplary process flow diagram for detecting the surrounding environment of a user according to an embodiment of the present invention. 図8は、本発明の実施形態により、ユーザの周辺環境検知を実施するための例示の処理フロー図である。FIG. 8 is an exemplary process flow diagram for detecting the surrounding environment of a user according to an embodiment of the present invention. 図9は、没入型仮想空間に対する実空間のユーザ周辺環境の提示に関する第1実施例の画面例である。FIG. 9 is a screen example of the first embodiment relating to the presentation of the user's surrounding environment in the real space with respect to the immersive virtual space. 図10は、没入型仮想空間に対する実空間のユーザ周辺環境の提示に関する第2実施例の画面例である。FIG. 10 is a screen example of the second embodiment relating to the presentation of the user's surrounding environment in the real space with respect to the immersive virtual space. 図11は、没入型仮想空間に対する実空間のユーザ周辺環境の提示に関する第2実施例の画面例である。FIG. 11 is a screen example of the second embodiment relating to the presentation of the user's surrounding environment in the real space with respect to the immersive virtual space. 図12は、没入型仮想空間に対する実空間のユーザ周辺環境の提示に関する第2実施例の画面例である。FIG. 12 is a screen example of the second embodiment relating to the presentation of the user's surrounding environment in the real space with respect to the immersive virtual space.
 以下に、図面を参照して、本発明の実施形態による、没入型仮想空間に実空間のユーザの周辺環境を提示するためのヘッドマウント・ディスプレイ・システムおよびコンピュータ・プログラムについて説明する。図中、同一の構成要素には同一の符号を付してある。 Hereinafter, a head-mounted display system and a computer program for presenting a user's surrounding environment in an immersive virtual space according to an embodiment of the present invention will be described with reference to the drawings. In the figure, the same components are denoted by the same reference numerals.
 図2は、本発明の実施形態によるコンピュータ・プログラムを実施するために、ヘッドマウント・ディスプレイ(以下、「HMD」と称する。)を用いたHMDシステム100の全体概略図である。図示のように、HMDシステム100は、HMD本体110、コンピュータ(制御回路部)120、ポジション・トラッキング・カメラ130、およびリアルタイム・カメラ140を備える。 FIG. 2 is an overall schematic diagram of an HMD system 100 using a head-mounted display (hereinafter referred to as “HMD”) in order to execute a computer program according to an embodiment of the present invention. As illustrated, the HMD system 100 includes an HMD main body 110, a computer (control circuit unit) 120, a position tracking camera 130, and a real-time camera 140.
 HMD110は、ディスプレイ112およびセンサ114を具備する。ディスプレイ112は、ユーザの視界を完全に覆うよう構成された非透過型の表示装置であり、ユーザはディスプレイ112に表示される画像のみを観察することになる。非透過型HMD110を装着したユーザは、外界の視界を全て失うため、制御回路部120において実行されるアプリケーションによってディスプレイ112に表示される3次元仮想空間内に完全に没入するような表示態様となる。 The HMD 110 includes a display 112 and a sensor 114. The display 112 is a non-transmissive display device configured to completely cover the user's field of view, and the user observes only the images displayed on the display 112. Since the user wearing the non-transmissive HMD 110 loses all view of the outside world, the display mode is such that the user is completely immersed in the three-dimensional virtual space displayed on the display 112 by the application executed in the control circuit unit 120.
 HMD110が具備するセンサ114は、ディスプレイ112近辺に固定される。センサ114は、地磁気センサ、加速度センサ、および/または傾き(角度、ジャイロ)センサを含み、これらの1つ以上を通じて、ユーザの頭部に装着されたHMD110(ディスプレイ112)の各種動きを検知することができる。特に角度センサの場合には、図4のように、HMD110の動きに応じて、HMD110の3軸回りの角度を経時的に検知し、各軸回りの角度(傾き)の時間変化を決定することができる。 The sensor 114 included in the HMD 110 is fixed near the display 112. The sensor 114 includes a geomagnetic sensor, an acceleration sensor, and/or a tilt (angle, gyro) sensor, and through one or more of these can detect various movements of the HMD 110 (display 112) worn on the user's head. In particular, in the case of an angle sensor, as shown in FIG. 4, the angles of the HMD 110 about its three axes are detected over time in accordance with the movement of the HMD 110, and the temporal change in the angle (tilt) about each axis can be determined.
 そこで、図3を参照して傾きセンサで検知可能な角度情報データについて具体的に説明する。図示のように、HMDを装着したユーザの頭部を中心として、XYZ座標が規定される。ユーザが直立する垂直方向をY軸、Y軸と直交しディスプレイ112の中心とユーザを結ぶ方向をZ軸、Y軸およびZ軸と直交する方向の軸をX軸とする。傾きセンサでは、各軸回りの角度、具体的にはY軸を軸とした回転を示すヨー角、X軸を軸とした回転を示すピッチ角、およびZ軸を軸とした回転を示すロール角で決定される傾きを検知し、その経時的な変化により、動き検知部210が視界情報として角度(傾き)情報データを決定する。 The angle information data detectable by the tilt sensor will now be described in detail with reference to FIG. 3. As illustrated, XYZ coordinates are defined around the head of the user wearing the HMD. The vertical direction in which the user stands upright is the Y axis, the direction orthogonal to the Y axis and connecting the center of the display 112 with the user is the Z axis, and the axis orthogonal to both the Y axis and the Z axis is the X axis. The tilt sensor detects the tilt determined by the angle about each axis, specifically the yaw angle indicating rotation about the Y axis, the pitch angle indicating rotation about the X axis, and the roll angle indicating rotation about the Z axis, and from their changes over time the motion detection unit 210 determines angle (tilt) information data as view information.
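By way of non-limiting illustration, the determination of angle (tilt) change about each axis from successive tilt-sensor samples might be sketched as follows; the function names, the degree units, and the (yaw, pitch, roll) tuple convention are assumptions of this sketch, not part of the disclosed embodiment:

```python
def angular_delta(prev_deg, curr_deg):
    """Smallest signed change between two angles in degrees, in (-180, 180]."""
    d = (curr_deg - prev_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

def head_rotation(prev, curr):
    """Per-axis change in HMD orientation between two tilt-sensor samples.

    `prev` and `curr` are hypothetical (yaw, pitch, roll) tuples in degrees:
    yaw about the Y axis, pitch about the X axis, roll about the Z axis,
    following the coordinate system of FIG. 3.
    """
    return tuple(angular_delta(p, c) for p, c in zip(prev, curr))
```

Wrapping each difference into (-180, 180] avoids a spurious near-360-degree jump when the yaw reading crosses the 0/360 boundary.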
 図2に戻り、HMDシステム100が備えるコンピュータ(制御回路部)120は、ポジション・トラッキング・カメラ130およびリアルタイム・カメラ140に接続される。そして、HMDを装着したユーザを3次元仮想空間に没入させ、3次元仮想空間に基づく動作を実施させるための制御回路装置として機能する。図示のように、制御回路部120は、HMD110とは別のハードウェアとして構成してよい。当該ハードウェアは、パーソナルコンピュータやネットワークを通じたサーバ・コンピュータのようなコンピュータとすることができる。即ち、図示はしないが、相互にバス接続されたCPU、主記憶、補助記憶、送受信部、表示部、および入力部を備える任意のコンピュータとすることができる。代替として、制御回路部120は、視界調整装置としてHMD110内部に搭載されてもよい。この場合は、制御回路部120は、視界調整装置の全部または一部の機能のみを実装することができる。一部のみを実装した場合には、残りの機能をHMD110側、またはネットワークを通じたサーバ・コンピュータ(非図示)側に実装してもよい。 Returning to FIG. 2, the computer (control circuit unit) 120 included in the HMD system 100 is connected to the position tracking camera 130 and the real-time camera 140. It functions as a control circuit device for immersing the user wearing the HMD in the three-dimensional virtual space and causing operations based on the three-dimensional virtual space to be performed. As illustrated, the control circuit unit 120 may be configured as hardware separate from the HMD 110. This hardware can be a computer such as a personal computer or a server computer connected via a network. That is, although not illustrated, it can be any computer including a CPU, main memory, auxiliary memory, a transmission/reception unit, a display unit, and an input unit connected to one another by a bus. Alternatively, the control circuit unit 120 may be mounted inside the HMD 110 as a visual field adjustment device. In this case, the control circuit unit 120 may implement all or only part of the functions of the visual field adjustment device. When only part is implemented, the remaining functions may be implemented on the HMD 110 side or on a server computer (not shown) via a network.
 更に、HMDシステム100が備えるポジション・トラッキング・カメラ130は、制御回路部120に通信可能に接続され、HMD110の位置追跡機能を有する。ポジション・トラッキング・カメラ130は、赤外線センサおよび/または複数の光学カメラを用いて実現される。HMDシステム100は、ポジション・トラッキング・カメラ130を具備し、ユーザ頭部のHMDの位置を検知することによって、実空間のHMDの位置および3次元仮想空間における仮想カメラ/没入ユーザの仮想空間位置を正確に対応付けて、特定することができる。 Furthermore, the position tracking camera 130 provided in the HMD system 100 is communicably connected to the control circuit unit 120 and has a position tracking function for the HMD 110. The position tracking camera 130 is realized using an infrared sensor and/or a plurality of optical cameras. By including the position tracking camera 130 and detecting the position of the HMD on the user's head, the HMD system 100 can accurately associate and identify the position of the HMD in real space with the virtual space position of the virtual camera/immersed user in the three-dimensional virtual space.
 より具体的には、ポジション・トラッキング・カメラ130は、図4に例示的に示すように、HMD110上に仮想的に設けられ、赤外線を検知する複数の検知点の実空間位置をユーザの動きに対応して経時的に検知する。そして、ポジション・トラッキング・カメラ130により検知された実空間位置の経時的変化に基づいて、HMD110の動きに応じて、実空間のHMDの位置および3次元仮想空間における仮想カメラ/没入ユーザの仮想空間位置を正確に対応付け、特定することができる。なお、本発明の実施の形態では、ポジション・トラッキング・カメラ130は任意の構成要素である。ポジション・トラッキング・カメラ130を用いない場合には、3次元仮想空間においてユーザを常に中心(原点)に配置するように構成することになる。 More specifically, as exemplarily shown in FIG. 4, the position tracking camera 130 detects over time, in accordance with the user's movement, the real-space positions of a plurality of infrared detection points virtually provided on the HMD 110. Based on the temporal change in the real-space positions detected by the position tracking camera 130, the position of the HMD in real space and the virtual space position of the virtual camera/immersed user in the three-dimensional virtual space can be accurately associated and identified in accordance with the movement of the HMD 110. Note that in the embodiment of the present invention the position tracking camera 130 is an optional component; when the position tracking camera 130 is not used, the user is always placed at the center (origin) of the three-dimensional virtual space.
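A minimal sketch of the position association described above, under the assumption that the tracked HMD position arrives as (x, y, z) coordinates in metres and that a simple offset-and-scale mapping into the virtual space suffices; the fallback to the origin when tracking is absent follows the paragraph above:

```python
def to_virtual_position(real_pos, origin=(0.0, 0.0, 0.0), scale=1.0, tracking=True):
    """Map a tracked real-space HMD position to a virtual-camera position.

    `real_pos` and `origin` are hypothetical (x, y, z) readings in metres
    from the position tracking camera. When tracking is disabled (the
    camera is an optional component), the user stays at the virtual-space
    origin, as described in the text.
    """
    if not tracking:
        return (0.0, 0.0, 0.0)
    # Offset by the calibrated origin, then scale real metres to virtual units.
    return tuple((r - o) * scale for r, o in zip(real_pos, origin))
```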
 再度図2に戻り、HMDシステム100が備えるリアルタイム・カメラ140は、ユーザの実空間の周辺画像をリアルタイムで撮影して生成すると共に、生成した画像をリアルタイムまたは一定間隔で記憶する。リアルタイム・カメラは、周辺環境を複数の異なる方向から同時に撮影することにより、その奥行き方向の情報も記録できるようにしたステレオ・カメラとしてもよい。そして、リアルタイム・カメラ140は、USBやIEEE1394等のインタフェースを有し、接続先のコンピュータ120にリアルタイム画像転送が可能である。特に、ネットワーク・インタフェースを有し、有線/無線通信を通じてローカル/グローバルにアクセス可能なネットワーク・カメラ、特にWebカメラとするのがよい。 Returning again to FIG. 2, the real-time camera 140 provided in the HMD system 100 captures and generates peripheral images of the user's real space in real time, and stores the generated images in real time or at regular intervals. The real-time camera may be a stereo camera that simultaneously photographs the surrounding environment from a plurality of different directions so that depth information can also be recorded. The real-time camera 140 has an interface such as USB or IEEE 1394 and can transfer images in real time to the connected computer 120. In particular, it is preferably a network camera, especially a web camera, that has a network interface and is locally/globally accessible through wired/wireless communication.
 これより図5以降を参照して、本発明の実施形態により、没入型仮想空間に実空間のユーザの周辺環境を提示するための各情報処理について説明する。図5は、没入型仮想空間に実空間のユーザの周辺環境を提示するために、実空間の周辺環境に関する情報を3次元仮想空間に関連付けてHMDに表示するための模式図である。 Now, with reference to FIG. 5 and subsequent drawings, each information processing for presenting the surrounding environment of the user in the real space in the immersive virtual space according to the embodiment of the present invention will be described. FIG. 5 is a schematic diagram for displaying information related to the surrounding environment of the real space on the HMD in association with the three-dimensional virtual space in order to present the surrounding environment of the user in the real space in the immersive virtual space.
 図示のように、コンピュータ120のディスプレイの上部にリアルタイム・カメラ140が設置され、それと向かい合うようにユーザ1がHMD110を装着して配置される。ユーザのHMDに表示される3次元画像は、同時に、2次元画像としてコンピュータ120のディスプレイに表示されている。そして、コンピュータ120の当該ディスプレイを覗きこむように別の2人(2,3)がユーザ1の至近距離に存在する。この段階では、HMDユーザ1は、3次元仮想空間に没入した状態のため実空間の様子を観察できず、未だ人2,3の存在に気づいていない。 As illustrated, the real-time camera 140 is installed on top of the display of the computer 120, and the user 1 wearing the HMD 110 is positioned facing it. The three-dimensional image displayed on the user's HMD is simultaneously displayed as a two-dimensional image on the display of the computer 120. Two other people (2, 3) are at close range to the user 1, looking into the display of the computer 120. At this stage, the HMD user 1, being immersed in the three-dimensional virtual space, cannot observe the state of the real space and has not yet noticed the presence of the people 2 and 3.
 本発明の実施形態によるHMDシステムは、HMDユーザ1に対し、人2,3に関する情報を3次元仮想空間に関連付けてHMDに表示することによって人2,3の存在を知らせる。より具体的には、まず、(i)人2,3が映り込んだリアルタイム・カメラ140の画像をコンピュータが取得し、顔認識プログラムを用いて当該画像から人2,3を検知する。次いで、(ii)検知した人の数に応じて、例えば、3次元仮想空間にキャラクタ表示を行うことによって、HMDユーザ1に2人の存在を知らせる。なお、上記(i)での顔認識プログラムにおける顔認識・検知機能は、当業者にとって公知のものを用いてよく、ここでは説明を省略する。また、当該顔認識を用いても、HMDユーザ1自身はHMDにより顔面の上半分が隠れた状態のため、人の顔としては検知されることはない。 The HMD system according to the embodiment of the present invention notifies the HMD user 1 of the existence of the persons 2 and 3 by displaying information on the persons 2 and 3 in association with the three-dimensional virtual space on the HMD. More specifically, first, (i) the computer acquires an image of the real-time camera 140 in which the people 2 and 3 are reflected, and detects the people 2 and 3 from the images using a face recognition program. Next, (ii) according to the number of detected persons, for example, by displaying characters in a three-dimensional virtual space, the HMD user 1 is notified of the presence of two persons. As the face recognition / detection function in the face recognition program in (i) above, those known to those skilled in the art may be used, and description thereof is omitted here. Even if the face recognition is used, the HMD user 1 is not detected as a human face because the upper half of the face is hidden by the HMD.
 図6は、本発明の実施形態により、没入型仮想空間に実空間のユーザの周辺環境を提示する情報処理を実装するために、図2の制御回路部120に関するコンポーネントの主要機能の構成を示したブロック図である。制御回路部120では、主に、センサ114/130からの入力を受け、当該入力をリアルタイム・カメラ140から取得した周辺画像と共に処理を実施して、HMD(ディスプレイ)112への出力を行う。制御回路部120は、主に、動き検知部210、視界決定部220および視界画像生成部230、並びに、周辺画像取得部250、環境検知部260、および空間画像重畳部270を含む。そして、仮想空間情報を格納した空間情報格納部280等の各種テーブルと相互作用することにより各種情報を処理するように構成される。 FIG. 6 is a block diagram showing the configuration of the main functions of the components of the control circuit unit 120 of FIG. 2 for implementing information processing that presents the user's real-space surrounding environment in the immersive virtual space according to the embodiment of the present invention. The control circuit unit 120 mainly receives input from the sensors 114/130, processes that input together with the peripheral image acquired from the real-time camera 140, and outputs the result to the HMD (display) 112. The control circuit unit 120 mainly includes a motion detection unit 210, a visual field determination unit 220, a visual field image generation unit 230, as well as a peripheral image acquisition unit 250, an environment detection unit 260, and a spatial image superimposition unit 270. It is configured to process various information by interacting with various tables, such as a spatial information storage unit 280 that stores virtual space information.
 動き検知部210では、センサ114/130で測定された動き情報の入力に基づいて、ユーザの頭部に装着されたHMD110の各種動きデータを決定する。本発明の実施形態では、特に、HMDが具備する傾きセンサ(ジャイロ・センサ)114により経時的に検知される傾き(角度)情報、および経時的に検知される位置情報を決定する。なお、本発明の実施形態では、上記のとおり、ポジション・トラッキング・カメラ130の使用は任意であり、使用しない場合には、3次元仮想空間においてユーザを常に中心(原点)に配置するように構成することになる。 Based on the input of motion information measured by the sensors 114/130, the motion detection unit 210 determines various motion data of the HMD 110 worn on the user's head. In the embodiment of the present invention, in particular, the tilt (angle) information detected over time by the tilt sensor (gyro sensor) 114 of the HMD and the position information detected over time are determined. As noted above, in the embodiment of the present invention the use of the position tracking camera 130 is optional; when it is not used, the user is always placed at the center (origin) of the three-dimensional virtual space.
 視界決定部220では、空間情報格納部280に格納された3次元仮想空間情報、並びに、動き検知部210で検知された傾き情報に基づいて、3次元仮想空間に配置した仮想カメラの位置、方向、そして、該仮想カメラからの視界領域を決定する。また、視界画像生成部230では、HMDに3次元仮想空間画像を表示するために、視界決定部220で決定した視界領域に対する360度パノラマの一部の視界画像を、仮想空間情報を用いて生成することができる。なお、視界画像は、2次元画像を左目用と右目用の2つを生成し、HMDにおいてこれら2つを重畳させて生成することで、ユーザには3次元画像の如くHMDに表示可能である。 Based on the three-dimensional virtual space information stored in the spatial information storage unit 280 and the tilt information detected by the motion detection unit 210, the visual field determination unit 220 determines the position and direction of the virtual camera placed in the three-dimensional virtual space and the view area from that virtual camera. To display a three-dimensional virtual space image on the HMD, the visual field image generation unit 230 can then generate, using the virtual space information, a partial view image of the 360-degree panorama corresponding to the view area determined by the visual field determination unit 220. Note that the view image is generated as two two-dimensional images, one for the left eye and one for the right eye, which are superimposed on the HMD so that the user perceives the result as a three-dimensional image.
 一方、周辺画像取得部250では、リアルタイム・カメラ140によって連続的に撮影された周辺画像を記憶部に取り込む。そして、環境検知部260では、周辺画像取得部250で取得した各周辺画像を用いて実空間の周辺環境、特に周辺環境の「変化」を検知する。より具体的には、顔認識プログラムを用いて周辺画像から人の顔の存在、特に人数の増減を検知することができる。なお、顔認識プログラムの実行は、制御回路部120側に実装する機能としてではなく、リアルタイム・カメラ140側に実装する機能として実行してもよいことが理解されるべきである。 On the other hand, the peripheral image acquisition unit 250 captures peripheral images continuously captured by the real-time camera 140 into the storage unit. Then, the environment detection unit 260 detects the surrounding environment in the real space, particularly “changes” in the surrounding environment, using the respective surrounding images acquired by the surrounding image acquisition unit 250. More specifically, it is possible to detect the presence of a human face, particularly the increase or decrease in the number of people, from a peripheral image using a face recognition program. It should be understood that the execution of the face recognition program may be executed not as a function implemented on the control circuit unit 120 side but as a function implemented on the real-time camera 140 side.
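One way the detection of a "change" in the surrounding environment could be realized is to compare the number of faces found in successive peripheral images; the sketch below assumes the face-recognition step (which the text leaves to known techniques, e.g. an external face-recognition program) has already reduced each image to an integer count, and its event naming is hypothetical:

```python
def detect_changes(face_counts):
    """Turn a stream of per-image detected face counts into change events.

    `face_counts` holds one integer per peripheral image, e.g. the number
    of faces a face-recognition routine found in that image (the
    recognizer itself is outside this sketch). Returns
    (image_index, event, delta) tuples whenever the number of people
    in the surrounding environment increases or decreases.
    """
    events = []
    prev = 0
    for i, n in enumerate(face_counts):
        if n > prev:
            events.append((i, "entered", n - prev))
        elif n < prev:
            events.append((i, "left", prev - n))
        prev = n
    return events
```

Each emitted event is the point at which the user would be notified, matching the text's remark that a change in the number of people identifies a change in the surrounding environment.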
 空間画像重畳部(出力部)270では、環境検知部260で検知した周辺環境に関する情報を、空間情報格納部280に格納された仮想空間情報に関連付けてHMD(ディスプレイ)112に出力する。周辺環境に関する情報の出力には、検知された人の数、即ち、周辺環境における人数を3次元仮想空間内で通知し、および/または検知された人を3次元仮想空間内において3次元キャラクタとして配置するように、これらの画像を空間画像に重畳させてHMD112に表示することを含む。 The spatial image superimposition unit (output unit) 270 outputs the information on the surrounding environment detected by the environment detection unit 260 to the HMD (display) 112 in association with the virtual space information stored in the spatial information storage unit 280. Outputting the information on the surrounding environment includes notifying, within the three-dimensional virtual space, the number of detected people, that is, the number of people in the surrounding environment, and/or superimposing character images on the spatial image and displaying them on the HMD 112 so that each detected person is placed in the three-dimensional virtual space as a three-dimensional character.
 しかしながら、周辺環境に関する情報のHMDへの出力はこれらに限定されない。例えば、リアルタイム・カメラ140として複数のステレオ・カメラのように奥行き情報を取得可能なものを採用した場合は、検知する人の位置情報も取得可能であるため、ポジション・トラッキング・カメラ130と併せて使用することで、3次元仮想空間におけるHMDユーザと人の位置関係についても、周辺環境に関する情報として有することができる。 However, the output of information on the surrounding environment to the HMD is not limited to these. For example, when a real-time camera 140 capable of acquiring depth information, such as a plurality of stereo cameras, is adopted, the position information of detected people can also be acquired; used together with the position tracking camera 130, the positional relationship between the HMD user and other people in the three-dimensional virtual space can also be held as information on the surrounding environment.
 次に、図7および図8を参照して、本発明の実施形態による、没入型仮想空間に実空間のユーザの周辺環境を提示する処理フローについて説明する。本情報処理は、主に、HMD110とコンピュータ(制御回路部)120との間の相互作用(図7)、および、リアルタイム・カメラ140とHMD110とコンピュータ(制御回路部)120との間の相互作用(図8)を通じて実施されることが理解される。図7は、図1(a)および図1(b)に示したように、ユーザのHMD傾け動作と共に没入型3次元仮想空間においてユーザが視界を変更しながら視界画像を表示する処理フローについて示す。一方、図8は、これに引き続いて、検知した実空間の周囲環境に関する情報を3次元仮想空間に関連付けてHMDに表示する処理フローについて示す。 Next, a processing flow for presenting the user's real-space surrounding environment in the immersive virtual space according to the embodiment of the present invention will be described with reference to FIGS. 7 and 8. It is understood that this information processing is carried out mainly through the interaction between the HMD 110 and the computer (control circuit unit) 120 (FIG. 7), and the interaction among the real-time camera 140, the HMD 110, and the computer (control circuit unit) 120 (FIG. 8). FIG. 7 shows a processing flow for displaying a view image while the user changes the field of view in the immersive three-dimensional virtual space through the user's HMD tilting operation, as shown in FIGS. 1(a) and 1(b). FIG. 8 then shows a processing flow for displaying information on the detected real-space surrounding environment on the HMD in association with the three-dimensional virtual space.
 図7では、まず、HMD110は、ステップS10-1のように、定常的に各センサ114/130を用いてユーザ頭部の動きを検知している。それを受けて、制御回路部120では、動き検知部210によってHMD110の傾き情報および位置情報を決定する。そして、ステップS20-1では、動き検知部210により、位置情報に基づいて3次元仮想空間内に配置される仮想カメラの位置を決定すると共に、ステップS20-2では、同じく動き検知部210により、傾き情報に基づき仮想カメラの向きを決定する。なお、上述したように、本実施の形態では、ポジション・トラッキング・カメラ130は必須の構成要素ではなく、設けない場合には、ステップS20-1は省略され、位置情報の決定は行わなくてよい。 In FIG. 7, first, the HMD 110 constantly detects the movement of the user's head using the sensors 114/130, as in step S10-1. In response, in the control circuit unit 120, the motion detection unit 210 determines the tilt information and position information of the HMD 110. In step S20-1, the motion detection unit 210 determines the position of the virtual camera placed in the three-dimensional virtual space based on the position information, and in step S20-2 it likewise determines the orientation of the virtual camera based on the tilt information. As described above, in the present embodiment the position tracking camera 130 is not an essential component; when it is not provided, step S20-1 is omitted and the position information need not be determined.
 引き続き、ステップS20-3では、視界決定部220により、仮想カメラの位置および向き、並びに仮想カメラの所定の視野角度に基づいて、3次元仮想空間内の仮想カメラからの視界領域を決定する。そして、ステップS20-4では、視界画像生成部230により、HMD112に表示するために、上記決定した視界領域に対する視界画像を生成する。次いで、ステップS10-2に進み、ステップS20-4で生成した視界画像をHMDのディスプレイ112に表示する。 Subsequently, in step S20-3, the visual field determination unit 220 determines the visual field region from the virtual camera in the three-dimensional virtual space based on the position and orientation of the virtual camera and the predetermined viewing angle of the virtual camera. In step S20-4, the visual field image generation unit 230 generates a visual field image for the determined visual field region for display on the HMD 112. Next, the process proceeds to step S10-2, and the field-of-view image generated in step S20-4 is displayed on the display 112 of the HMD.
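As a hedged illustration of how the virtual camera's viewing direction could be derived from the tilt information before the view area is cut out in step S20-3, assuming the coordinate conventions of FIG. 3 (yaw about the vertical Y axis, pitch about the X axis, and +Z as the neutral gaze direction; roll does not move the gaze itself):

```python
import math

def view_direction(yaw_deg, pitch_deg):
    """Unit vector of the virtual camera's viewing direction.

    Assumes the axes of FIG. 3: yaw rotates about the vertical Y axis,
    pitch about the X axis, and zero rotation looks along +Z. These
    conventions are illustrative, not mandated by the embodiment.
    """
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.sin(yaw) * math.cos(pitch),
            math.sin(pitch),
            math.cos(yaw) * math.cos(pitch))
```

The view area of step S20-3 would then be the portion of the 360-degree panorama around this vector, bounded by the virtual camera's predetermined viewing angle.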
 上記ステップS10-1、S20-1からS20-4、次いでS10-2が一連の基本処理ルーチンであり、アプリケーション実行中は、これらステップは基本的に繰り返し処理される。3次元仮想空間に没入したユーザにとっては、通常の動作モードとして、自身の頭部を傾ける動作を通じて、様々な位置および方向からの3次元仮想空間の視界をビュー可能である。他方、このような基本処理ルーチンと併せて、制御回路部120の周辺画像取得部250では、周辺環境に人を検知するために、リアルタイム・カメラ140から周辺画像を定常的に取得している。そこで、図8に、検知した実空間の周囲環境に関する情報を3次元仮想空間に関連付けてHMDに表示する処理フローについて示す。 Steps S10-1, S20-1 to S20-4, and then S10-2 are a series of basic processing routines, and these steps are basically repeatedly processed during execution of the application. As a normal operation mode, a user who is immersed in the three-dimensional virtual space can view the field of view of the three-dimensional virtual space from various positions and directions through the operation of tilting his / her head. On the other hand, along with such a basic processing routine, the peripheral image acquisition unit 250 of the control circuit unit 120 regularly acquires a peripheral image from the real-time camera 140 in order to detect a person in the peripheral environment. FIG. 8 shows a processing flow for displaying the information related to the detected surrounding environment of the real space on the HMD in association with the three-dimensional virtual space.
 ステップS30-1では、リアルタイム・カメラ140による連続的なカメラ撮影を通じて、周辺画像が連続的に生成される。これに応じて、ステップS20-5では、制御回路部120の周辺画像取得部250により周辺画像を取得すると共に、ステップS20-6では、環境検知部260により取得済みの周辺画像を用いて周辺環境を検知する。特に、周辺環境の変化を検知する。具体的には、画像に対する顔認識機能を用いて周辺に存在する人や人数を検知する。特に、人数が増減した場合には、周辺環境が変化したものと特定することができ、変化についてユーザに通知するのが有利となる。 In step S30-1, peripheral images are continuously generated through continuous shooting by the real-time camera 140. Accordingly, in step S20-5 the peripheral image acquisition unit 250 of the control circuit unit 120 acquires a peripheral image, and in step S20-6 the environment detection unit 260 detects the surrounding environment using the acquired peripheral image. In particular, changes in the surrounding environment are detected. Specifically, a face recognition function applied to the image detects the people present in the vicinity and their number. In particular, when the number of people increases or decreases, it can be determined that the surrounding environment has changed, and it is advantageous to notify the user of the change.
 ステップS20-7で環境検知部260により人の検知があった場合には、引き続きステップS20-8に進む。ステップS20-8では、空間画像重畳部270によって、周辺環境に基づく情報および/または対応する画像を生成し、空間情報格納部280に格納された仮想空間情報に上記情報および/または画像を関連付けて、ステップS10-3においてHMD110に表示する。上記周辺環境に基づく情報には、人数やキャラクタ情報が含まれるがこれに限定されないことが理解されるべきである。 If the environment detection unit 260 detects a person in step S20-7, processing continues to step S20-8. In step S20-8, the spatial image superimposition unit 270 generates information based on the surrounding environment and/or a corresponding image, associates the information and/or image with the virtual space information stored in the spatial information storage unit 280, and displays it on the HMD 110 in step S10-3. It should be understood that the information based on the surrounding environment includes, but is not limited to, the number of people and character information.
 ステップS10-3でのHMD110への表示について、以下に、図9~図12に示す幾らかの実施例を用いて説明する。ここでは、具体的な表示態様として、図9に第1実施例を、図10~図12に第2実施例を示す。 The display on the HMD 110 in step S10-3 will be described below using several examples shown in FIGS. 9 to 12. Here, as specific display modes, FIG. 9 shows a first example and FIGS. 10 to 12 show a second example.
 図9は、図1(a)に示した仮想マルチディスプレイ・アプリケーションで表示される画面である。そして、顔認識により特定された人が1人の場合の例である。図示のように、ここでは「閲覧者が1名います。」というメッセージを3次元仮想空間内にメッセージ表示する(画面中央上部)。それと同時に、図9では、仮想メイン・ディスプレイとなる中央部分に、リアルタイム・カメラからの実空間の映像をそのまま表示させることにより、仮想空間画像の一部としてHMDに表示する。即ち、HMDユーザの後方に一人が存在し、その顔がリアルタイム・カメラで撮影・検知されている状況がそのまま3次元仮想空間内で映されHMDに表示される。第1実施例では、このように、3次元仮想空間に没入状態にあるユーザは、HMDを脱着することなく装着したままでも、自身の近辺に人が存在するといった周辺環境の変化を検知可能となる。つまり、本実施例では、リアルタイム・カメラを人検知センサとして適用可能である。 FIG. 9 shows a screen displayed by the virtual multi-display application of FIG. 1(a), in the case where one person has been identified by face recognition. As illustrated, the message "There is one viewer." is displayed within the three-dimensional virtual space (upper center of the screen). At the same time, in FIG. 9, real-space video from the real-time camera is displayed as-is in the central portion serving as the virtual main display, and thus appears on the HMD as part of the virtual space image. That is, the situation in which one person is behind the HMD user and that person's face is being photographed and detected by the real-time camera is shown as-is within the three-dimensional virtual space and displayed on the HMD. In the first example, a user immersed in the three-dimensional virtual space can thus detect a change in the surrounding environment, such as the presence of a person nearby, while keeping the HMD on, without removing it. In other words, in this example the real-time camera can be applied as a human detection sensor.
 FIGS. 10 to 12 show a second embodiment in which, when a person is detected with the real-time camera, a character image is superimposed on the virtual space image and displayed on the HMD so that a character corresponding to that person is placed within the three-dimensional virtual space. When a person's face is detected in real space by the real-time camera, the system simultaneously identifies which area of the surrounding image contains the detected face (for example, the right/left areas, or the upper-right/lower-right/upper-left/lower-left areas of the image). Then, in the three-dimensional virtual space, the character image is superimposed on the virtual space image and displayed on the HMD so that a "bear" character corresponding to the person moves into view from the direction corresponding to that area (for example, from the left/right, or from the upper-right/lower-right/upper-left/lower-left). In the example of FIG. 10, because a person is present on the left as viewed from the real-time camera, as in FIG. 9, the "bear" character corresponding to that person preferably moves into view on the HMD by running from the left toward the right (arrow).
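The area-to-direction mapping of the second embodiment can be sketched as follows. The quadrant names and the `from_left`/`from_right` return values are illustrative assumptions; the patent only requires that the character enter from the direction in the virtual space corresponding to the area of the detected face:

```python
def face_area(face, img_w, img_h):
    """Classify which area of the surrounding image a detected face
    occupies, using the center of its bounding box (left/right half,
    upper/lower half)."""
    x, y, w, h = face
    cx, cy = x + w / 2, y + h / 2
    horiz = "left" if cx < img_w / 2 else "right"
    vert = "upper" if cy < img_h / 2 else "lower"
    return horiz, vert

def entry_direction(area):
    """Map the image area to the direction from which the character
    moves into the virtual space. In Fig. 10, a person on the camera's
    left yields a character running in from the left; this simple
    left/right mapping is an assumption for illustration."""
    horiz, _ = area
    return "from_left" if horiz == "left" else "from_right"
```

A renderer would then spawn the character just outside the view on the returned side and animate it walking toward the user's field of view.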
 FIGS. 11 and 12 show examples in which two "bear" characters that have moved into view in this way are displayed on the HMD; the HMD user can thereby detect changes in the surrounding environment, such as the presence of two people nearby, while keeping the HMD on. FIG. 11 shows a front view of the "bear" characters, and FIG. 12 shows a rear view.
 Embodiments of presenting a user's real-space surrounding environment in an immersive virtual space according to the present invention have been described above with several examples, but the present invention is not limited to these embodiments. Those skilled in the art will understand that various modifications to the embodiments can be made without departing from the spirit and scope of the invention as set forth in the claims.
100    HMD system
110    HMD
112    Display
114    Tilt sensor (gyro sensor)
120    Computer (control circuit unit)
130    Position tracking camera
140    Real-time camera
210    Motion detection unit
220    View determination unit
230    View image generation unit
250    Peripheral image acquisition unit
260    Environment detection unit
270    Spatial image superimposing (output) unit
280    Spatial information storage unit

Claims (7)

  1.  An HMD system comprising:
     a head-mounted display (HMD) worn by a user, the HMD displaying a virtual space image generated based on virtual space information so as to immerse the user in a virtual space;
     a real-time camera that generates a peripheral image of the user's real space; and
     a computer connected to the HMD and the real-time camera, the computer being configured to acquire the peripheral image from the real-time camera, detect the surrounding environment of the real space using the acquired peripheral image, and output information related to the surrounding environment to the HMD in association with the virtual space information;
    wherein the detection of the surrounding environment using the peripheral image includes detection of a person in the peripheral image, and
    wherein the output of the information related to the surrounding environment includes, in response to a change in the surrounding environment of the real space caused by the detection of the person, notifying within the virtual space the number of persons detected in the surrounding environment and, at the same time, displaying the real-space video from the real-time camera on a specific one of a plurality of virtual displays provided in the virtual space, thereby displaying it on the HMD as part of the virtual space image.
  2. The HMD system according to claim 1, wherein the detection of the person includes detection by face recognition.
  3. The HMD system according to claim 1 or 2, wherein the output of the information related to the surrounding environment includes superimposing a character image on the virtual space image so that a character corresponding to the person is placed in the virtual space.
  4. The HMD system according to claim 3,
    wherein the detection of the surrounding environment includes identifying the area of the person's face on the peripheral image, and
    wherein the output of the information related to the surrounding environment superimposes the character image on the virtual space image so that the character moves into view from the direction in the virtual space corresponding to the identified area.
  5.  A computer program for presenting a user's real-space surrounding environment in an immersive virtual space, a real-time camera and a head-mounted display (HMD) being connected to the computer, the program causing the computer to function as:
     a peripheral image acquisition unit that acquires a peripheral image of the user's real space from the real-time camera;
     an environment detection unit that detects the surrounding environment of the real space using the acquired peripheral image; and
     an output unit that outputs information related to the detected surrounding environment to the HMD in association with the virtual space information;
     wherein the detection of the surrounding environment using the peripheral image includes detection of a person in the peripheral image, and
     wherein the output of the information related to the surrounding environment includes, in response to a change in the surrounding environment of the real space caused by the detection of the person, notifying within the virtual space the number of persons detected in the surrounding environment and, at the same time, displaying the real-space video from the real-time camera on a specific one of a plurality of virtual displays provided in the virtual space, thereby displaying it on the HMD as part of the virtual space image.
  6. The computer program according to claim 5, wherein the detection of the person includes detection by face recognition.
  7.  The computer program according to claim 5 or 6, wherein the output of the information related to the surrounding environment includes superimposing a character image on the virtual space image so that a character corresponding to the person is placed in the virtual space.
PCT/JP2016/056686 2015-04-08 2016-03-03 Head-mounted display system and computer program for presenting real space surrounding environment of user in immersive virtual space WO2016163183A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015078921A JP5869712B1 (en) 2015-04-08 2015-04-08 Head-mounted display system and computer program for presenting a user's surrounding environment in an immersive virtual space
JP2015-078921 2015-04-08

Publications (1)

Publication Number Publication Date
WO2016163183A1 true WO2016163183A1 (en) 2016-10-13

Family

ID=55360933

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/056686 WO2016163183A1 (en) 2015-04-08 2016-03-03 Head-mounted display system and computer program for presenting real space surrounding environment of user in immersive virtual space

Country Status (2)

Country Link
JP (1) JP5869712B1 (en)
WO (1) WO2016163183A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10685211B2 (en) 2015-08-04 2020-06-16 Sony Interactive Entertainment Inc. Head-mounted display, display control method, and program

Families Citing this family (8)

Publication number Priority date Publication date Assignee Title
KR102436464B1 (en) * 2015-06-05 2022-08-26 삼성전자주식회사 Method for outputting notification information and electronic device thereof
JP5996814B1 (en) 2016-02-08 2016-09-21 株式会社コロプラ Method and program for providing image of virtual space to head mounted display
JP6392832B2 (en) * 2016-12-06 2018-09-19 株式会社コロプラ Information processing method, apparatus, and program for causing computer to execute information processing method
JP6812803B2 (en) * 2017-01-12 2021-01-13 ソニー株式会社 Information processing equipment, information processing methods, and programs
JP6244593B1 (en) * 2017-01-30 2017-12-13 株式会社コロプラ Information processing method, apparatus, and program for causing computer to execute information processing method
JP6917340B2 (en) * 2018-05-17 2021-08-11 グリー株式会社 Data processing programs, data processing methods, and data processing equipment
JP6979539B2 (en) * 2018-11-06 2021-12-15 株式会社ソニー・インタラクティブエンタテインメント Information processing system, display method and computer program
CN114041101A (en) * 2019-07-11 2022-02-11 惠普发展公司,有限责任合伙企业 Eye tracking for displays

Citations (4)

Publication number Priority date Publication date Assignee Title
JP2010145436A (en) * 2008-12-16 2010-07-01 Brother Ind Ltd Head-mounted display
JP2012114755A (en) * 2010-11-25 2012-06-14 Brother Ind Ltd Head-mounted display and computer program
JP2013257716A (en) * 2012-06-12 2013-12-26 Sony Computer Entertainment Inc Obstacle avoiding device and obstacle avoidance method
JP2015191124A (en) * 2014-03-28 2015-11-02 ソフトバンクBb株式会社 Non-transmission type head-mounted display and program

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
JP2009145883A (en) * 2007-11-20 2009-07-02 Rissho Univ Learning system, storage medium, and learning method
US20160054565A1 (en) * 2013-03-29 2016-02-25 Sony Corporation Information processing device, presentation state control method, and program

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
JP2010145436A (en) * 2008-12-16 2010-07-01 Brother Ind Ltd Head-mounted display
JP2012114755A (en) * 2010-11-25 2012-06-14 Brother Ind Ltd Head-mounted display and computer program
JP2013257716A (en) * 2012-06-12 2013-12-26 Sony Computer Entertainment Inc Obstacle avoiding device and obstacle avoidance method
JP2015191124A (en) * 2014-03-28 2015-11-02 ソフトバンクBb株式会社 Non-transmission type head-mounted display and program

Cited By (3)

Publication number Priority date Publication date Assignee Title
US10685211B2 (en) 2015-08-04 2020-06-16 Sony Interactive Entertainment Inc. Head-mounted display, display control method, and program
US11417126B2 (en) 2015-08-04 2022-08-16 Sony Interactive Entertainment Inc. Head-mounted display, display control method, and program
US11763578B2 (en) 2015-08-04 2023-09-19 Sony Interactive Entertainment Inc. Head-mounted display, display control method, and program

Also Published As

Publication number Publication date
JP2016198180A (en) 2016-12-01
JP5869712B1 (en) 2016-02-24

Similar Documents

Publication Publication Date Title
JP5869712B1 (en) Head-mounted display system and computer program for presenting a user's surrounding environment in an immersive virtual space
CN110413105B (en) Tangible visualization of virtual objects within a virtual environment
KR102502404B1 (en) Information processing device and method, and program
JP5996814B1 (en) Method and program for providing image of virtual space to head mounted display
WO2017047367A1 (en) Computer program for line-of-sight guidance
JP2013258614A (en) Image generation device and image generation method
WO2015068656A1 (en) Image-generating device and method
US11184597B2 (en) Information processing device, image generation method, and head-mounted display
JP6899875B2 (en) Information processing device, video display system, information processing device control method, and program
US20180246331A1 (en) Helmet-mounted display, visual field calibration method thereof, and mixed reality display system
JP6399692B2 (en) Head mounted display, image display method and program
US11195320B2 (en) Feed-forward collision avoidance for artificial reality environments
JP2017093946A (en) Image display method and program
JP2017138973A (en) Method and program for providing virtual space
JP2024050696A (en) Information processing apparatus, user guide presentation method, and head mounted display
JP2021135776A (en) Information processor, information processing method, and program
JP5952931B1 (en) Computer program
JP2017046233A (en) Display device, information processor, and control method of the same
US11474595B2 (en) Display device and display device control method
US11212501B2 (en) Portable device and operation method for tracking user's viewpoint and adjusting viewport
US20150379775A1 (en) Method for operating a display device and system with a display device
JP2016181267A (en) Computer program
US20210314557A1 (en) Information processing apparatus, information processing method, and program
US20230290081A1 (en) Virtual reality sharing method and system
JP6738308B2 (en) Information processing method, program, virtual space distribution system and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16776352

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16776352

Country of ref document: EP

Kind code of ref document: A1