WO2018124280A1 - Simulation system, image processing method, and information storage medium - Google Patents
- Publication number: WO2018124280A1 (international application PCT/JP2017/047250)
- Authority: WIPO (PCT)
- Prior art keywords: virtual space, user, image, virtual, position information
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63F—CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
- A63F13/00—Video games, i.e. games using an electronically generated display having two or more dimensions
- A63F13/50—Controlling the output signals based on the game progress
- A63F13/52—Controlling the output signals based on the game progress involving aspects of the displayed game scene
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G5/00—Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
- The present invention relates to a simulation system, an image processing method, an information storage medium, and the like.
- A simulation system is known that generates an image seen from a virtual camera in a virtual space.
- For example, such a simulation system realizes virtual reality (VR) by displaying the image seen from the virtual camera on an HMD (head-mounted display device).
- Such a system is disclosed in, for example, Patent Document 1.
- An object of the present invention is to provide a simulation system, an image processing method, an information storage medium, and the like that can realize virtual reality in which the user can move among a plurality of virtual spaces.
- One aspect of the present invention includes a virtual space setting unit that performs a setting process of a virtual space in which objects are arranged, a moving body processing unit that performs a process of moving a user moving body corresponding to a user in the virtual space, and a display processing unit that performs a drawing process of an image seen from a virtual camera in the virtual space. The virtual space setting unit sets, as the virtual space, a first virtual space and a second virtual space connected to the first virtual space via a singular point. Before a given switching condition is satisfied, the display processing unit sets the position information of the user moving body or the virtual camera as position information of the first virtual space and, as a first drawing process, draws the image of the second virtual space in addition to the image of the first virtual space, so that the image of the second virtual space is displayed in the region corresponding to the singular point.
- When the user moving body or the virtual camera passes through the place corresponding to the singular point in a first traveling direction and the switching condition is thereby satisfied, the display processing unit sets the position information of the user moving body or the virtual camera as position information of the second virtual space and, as a second drawing process, draws the image of the first virtual space in addition to the image of the second virtual space. The present invention relates to a simulation system that thereby generates an image in which the image of the first virtual space is displayed in the region corresponding to the singular point when the line-of-sight direction of the virtual camera is directed to the side opposite to the first traveling direction.
- the present invention also relates to a program that causes a computer to function as each of the above-described units, or a computer-readable information storage medium that stores the program.
- According to one aspect of the present invention, an image seen from a virtual camera is generated in a virtual space in which a user moving body moves, and a first virtual space and a second virtual space are set as the virtual space.
- Before the switching condition is satisfied, the position information of the user moving body or the virtual camera is set as position information of the first virtual space, the image of the second virtual space is drawn in addition to the image of the first virtual space, and an image is generated in which the image of the second virtual space is displayed in the region corresponding to the singular point.
- After the switching condition is satisfied, the position information of the user moving body or the virtual camera is set as position information of the second virtual space, the image of the first virtual space is drawn in addition to the image of the second virtual space, and an image is generated in which the image of the first virtual space is displayed in the region corresponding to the singular point when the line-of-sight direction of the virtual camera faces the side opposite to the first traveling direction.
- When the user moving body or the virtual camera passes through the region corresponding to the singular point, its position is switched from a position in the first virtual space to a position in the second virtual space.
- Before the switch, the image of the second virtual space is drawn in addition to the image of the first virtual space, and the image of the second virtual space is displayed in the region corresponding to the singular point.
- After the switch, the image of the first virtual space is drawn in addition to the image of the second virtual space, and the image of the first virtual space is displayed in the region corresponding to the singular point. This makes it possible to provide a simulation system or the like that realizes virtual reality in which the user can move among a plurality of virtual spaces.
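The drawing-process selection described above can be sketched in code. This is a minimal illustrative sketch, not the patent's implementation; the class names, the way a "drawn space list" is represented, and the portal-region test are all assumptions.

```python
from dataclasses import dataclass


@dataclass
class SingularPoint:
    position: tuple          # center of the region corresponding to the singular point
    radius: float = 1.0      # extent of that region

    def contains(self, pos):
        # True when `pos` lies inside the region corresponding to the point.
        return sum((a - b) ** 2 for a, b in zip(pos, self.position)) <= self.radius ** 2


class DisplayProcessor:
    def __init__(self, singular_point):
        self.singular_point = singular_point
        self.current_space = 1   # position information starts in the first virtual space

    def on_pass_through(self):
        # Switching condition: the user moving body / virtual camera passed
        # through the place corresponding to the singular point.
        self.current_space = 2 if self.current_space == 1 else 1

    def draw(self):
        if self.current_space == 1:
            # First drawing process: draw the first virtual space, plus the
            # second virtual space inside the singular-point region.
            return ["space1", ("space2", "singular_region")]
        # Second drawing process: the mirror image of the above.
        return ["space2", ("space1", "singular_region")]
```

A caller would test the camera position against `singular_point.contains(...)` each frame and invoke `on_pass_through()` when the crossing is detected.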
- In one aspect of the present invention, when the user moving body or the virtual camera passes through the singular point in a second traveling direction different from the first traveling direction, the display processing unit may set the position information of the user moving body or the virtual camera as position information of the first virtual space and, as the first drawing process, draw at least an image of the first virtual space.
- In this way, when the user moving body or the virtual camera passes through the region corresponding to the singular point in the second traveling direction, its position is switched back from the position in the second virtual space to the position in the first virtual space.
- In one aspect of the present invention, when the switching condition is satisfied by the user moving body or the virtual camera passing through a place corresponding to the singular point, the display processing unit may set the position information of the user moving body or the virtual camera as position information of a third virtual space and, as a third drawing process, draw at least an image of the third virtual space.
- In one aspect of the present invention, when a plurality of users play, the display processing unit may permit the third drawing process on condition that the plurality of user moving bodies or the plurality of virtual cameras corresponding to the plurality of users pass through the place corresponding to the singular point and return to the first virtual space.
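The multi-user condition above can be sketched as a simple gate: the third drawing process is permitted only after every user moving body (or virtual camera) has returned to the first virtual space. All names here are illustrative assumptions, not the patent's implementation.

```python
class ThirdSpaceGate:
    """Tracks, per user, whether that user's moving body has passed through
    the place corresponding to the singular point and returned to the first
    virtual space."""

    def __init__(self, user_ids):
        self.returned = {uid: False for uid in user_ids}

    def on_return_to_first_space(self, uid):
        # Called when this user's moving body (or virtual camera) has come
        # back to the first virtual space.
        self.returned[uid] = True

    def third_drawing_permitted(self):
        # Permit the third drawing process only when all users have returned.
        return all(self.returned.values())
```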
- In one aspect of the present invention, the display processing unit may determine whether the switching condition is satisfied based on input information of the user or detection information of a sensor.
- In one aspect of the present invention, an object corresponding to the singular point may be arranged in a play field in the real space in which the user moves, and the display processing unit may determine whether the switching condition is satisfied based on the positional relationship between the user and the object in the real space.
- the display processing unit may perform processing for setting or not setting the singular point.
- One aspect of the present invention may include a sensation device control unit that controls a sensation device for letting the user experience virtual reality (a computer is caused to function as the sensation device control unit), and the control of the sensation device when the switching condition is not satisfied may differ from the control of the sensation device when the switching condition is satisfied.
- One aspect of the present invention may include a notification processing unit that performs a process of outputting notification information about a collision between users in the real space (a computer may be caused to function as the notification processing unit).
- One aspect of the present invention may include an information acquisition unit that acquires position information of the user in the real space (a computer is caused to function as the information acquisition unit), and the moving body processing unit may move the user moving body in the virtual space based on the acquired position information.
- In one aspect of the present invention, the display processing unit may generate a display image for a head-mounted display device worn by the user. Before the switching condition is satisfied, the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, may be set as position information of the first virtual space; when the switching condition is satisfied, it may be set as position information of the second virtual space.
- In this way, the position information of the user in the real space is acquired, and the user moving body and the like are moved in the virtual space based on the acquired position information.
- When the switching condition is satisfied, the position of the user moving body or the virtual camera is switched from a position in the first virtual space to a position in the second virtual space, making it possible to further improve the virtual reality of moving among a plurality of virtual spaces.
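The mapping just described, from the same tracked real-space position to positions in different virtual spaces depending on the switching condition, can be sketched as follows. Treating each virtual space as an offset copy of the play field, and the function name and origins, are illustrative assumptions.

```python
def virtual_position(real_pos, switching_condition_met,
                     space1_origin=(0.0, 0.0, 0.0),
                     space2_origin=(100.0, 0.0, 0.0)):
    """Map the user's real-space position to a virtual-space position.

    Before the switching condition holds, real-space coordinates are
    interpreted as first-virtual-space coordinates; afterward, the same
    coordinates are interpreted as second-virtual-space coordinates.
    """
    origin = space2_origin if switching_condition_met else space1_origin
    return tuple(o + r for o, r in zip(origin, real_pos))
```

The user keeps walking in the same physical play field while the interpretation of that position changes, which is the switching behavior the aspect above describes.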
- Another aspect of the present invention includes an information acquisition unit that acquires position information of a user in real space, a virtual space setting unit that performs a setting process of a virtual space in which objects are arranged, a moving body processing unit that performs a process of moving a user moving body corresponding to the user in the virtual space based on the acquired position information, and a display processing unit that performs a drawing process of an image seen from a virtual camera in the virtual space and generates an image to be displayed on a head-mounted display device worn by the user. The virtual space setting unit sets, as the virtual space, a first virtual space and a second virtual space connected to the first virtual space via a singular point.
- Before a given switching condition is satisfied, the display processing unit sets the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, as position information of the first virtual space and, as a first drawing process, draws at least an image of the first virtual space.
- When the user moving body or the virtual camera passes through the place corresponding to the singular point in a first traveling direction and the switching condition is satisfied, the display processing unit sets the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, as position information of the second virtual space and, as a second drawing process, draws at least an image of the second virtual space. The present invention relates to such a simulation system.
- the present invention also relates to a program that causes a computer to function as each of the above-described units, or a computer-readable information storage medium that stores the program.
- user position information in real space is acquired, and a user moving body or the like is moved in the virtual space based on the acquired position information.
- the position of the user moving body or the virtual camera is switched from the position of the first virtual space to the position of the second virtual space.
- Another aspect of the present invention performs a virtual space setting process of setting a virtual space in which objects are arranged, a moving body process of moving a user moving body corresponding to a user in the virtual space, and a display process of drawing an image seen from a virtual camera in the virtual space. In the virtual space setting process, a first virtual space and a second virtual space connected to the first virtual space via a singular point are set as the virtual space.
- In the display process, before a given switching condition is satisfied, the position information of the user moving body or the virtual camera is set as position information of the first virtual space, and the image of the second virtual space is drawn in addition to the image of the first virtual space; when the switching condition is satisfied, the position information is set as position information of the second virtual space, and the image of the first virtual space is drawn in addition to the image of the second virtual space. The present invention relates to an image processing method that thereby generates an image in which the image of the first virtual space is displayed in the region corresponding to the singular point when the line-of-sight direction of the virtual camera faces the side opposite to the first traveling direction.
- Another aspect of the present invention performs an information acquisition process of acquiring position information of a user in real space, a virtual space setting process of setting a virtual space in which objects are arranged, a moving body process of moving a user moving body corresponding to the user in the virtual space based on the acquired position information, and a display process of drawing an image seen from a virtual camera in the virtual space and generating an image to be displayed on a head-mounted display device worn by the user.
- In the display process, before a given switching condition is satisfied, the position information of the user moving body or the virtual camera specified by the position information of the user in the real space is set as position information of the first virtual space, and at least an image of the first virtual space is drawn as a first drawing process. When the user moving body or the virtual camera passes through a place corresponding to the singular point in a first traveling direction and the switching condition is satisfied, the position information of the user moving body or the virtual camera specified by the position information of the user in the real space is set as position information of the second virtual space, and at least an image of the second virtual space is drawn as a second drawing process. The present invention relates to such an image processing method.
- FIG. 2A and FIG. 2B are examples of the HMD used in this embodiment.
- 3A and 3B are other examples of the HMD used in the present embodiment.
- FIG. 6A to FIG. 6E are explanatory diagrams of an example of a virtual space switching method.
- Explanatory diagram of the second virtual space.
- An example of an image when the door is seen from the second virtual space side.
- FIG. 10A and FIG. 10B are explanatory diagrams of a control example of the sensation apparatus in the second virtual space.
- FIGS. 15A and 15B are explanatory diagrams of a method of switching to the third virtual space.
- Explanatory diagram of the method of switching to the third virtual space.
- FIG. 17A and FIG. 17B are explanatory diagrams of the technique of this embodiment when a plurality of users play.
- FIG. 18A and FIG. 18B are explanatory diagrams of the technique of this embodiment when a plurality of users play.
- FIG. 19A and 19B are explanatory diagrams of a switching method based on user input information and sensor detection information.
- FIG. 20A and FIG. 20B are explanatory diagrams of a method for switching the virtual space by arranging an object corresponding to a singular point.
- FIG. 21A and FIG. 21B are explanatory diagrams of a method for setting or not setting a singular point.
- 22A and 22B are explanatory diagrams of a control method for the sensation apparatus.
- FIGS. 23A and 23B are explanatory diagrams of a method for outputting collision notification information.
- Explanatory diagram of a method for outputting notification information.
- FIG. 25A and FIG. 25B are explanatory diagrams of a special image display example in the present embodiment.
- FIG. 26A and FIG. 26B are explanatory diagrams of a method for acquiring position information of a user wearing an HMD.
- A flowchart showing a detailed processing example of the present embodiment.
- FIG. 1 is a block diagram illustrating a configuration example of a simulation system (a simulator, a game system, and an image generation system) according to the present embodiment.
- The simulation system of this embodiment is a system that simulates virtual reality (VR), and can be applied to various systems such as a game system that provides game content, a real-time simulation system such as a sports competition simulator or a driving simulator, a system that provides SNS services, a content providing system that provides content such as video, and an operating system that implements remote work.
- the simulation system of the present embodiment is not limited to the configuration shown in FIG. 1, and various modifications such as omitting some of the components (each unit) or adding other components are possible.
- the operation unit 160 is for a user (player) to input various operation information (input information).
- the operation unit 160 can be realized by various operation devices such as an operation button, a direction instruction key, a joystick, a handle, a pedal, a lever, or a voice input device.
- the storage unit 170 stores various types of information.
- The storage unit 170 serves as a work area for the processing unit 100, the communication unit 196, and the like.
- The game program, and the game data necessary for executing the game program, are stored in the storage unit 170.
- the function of the storage unit 170 can be realized by a semiconductor memory (DRAM, VRAM), HDD (Hard Disk Drive), SSD, optical disk device, or the like.
- the storage unit 170 includes an object information storage unit 172 and a drawing buffer 178.
- the information storage medium 180 (a computer-readable medium) stores programs, data, and the like, and its function can be realized by an optical disk (DVD, BD, CD), HDD, semiconductor memory (ROM), or the like.
- The processing unit 100 performs the various processes of the present embodiment based on a program (data) stored in the information storage medium 180. That is, the information storage medium 180 stores a program for causing a computer (an apparatus including an input device, a processing unit, a storage unit, and an output unit) to function as each unit of the present embodiment (a program for causing the computer to execute the processing of each unit).
- the HMD 200 (head-mounted display device) is a device that is mounted on the user's head and displays an image in front of the user's eyes.
- the HMD 200 is preferably a non-transmissive type, but may be a transmissive type.
- the HMD 200 may be a so-called glasses-type HMD.
- the HMD 200 includes a sensor unit 210, a display unit 220, and a processing unit 240. A modification in which a light emitting element is provided in the HMD 200 is also possible.
- the sensor unit 210 is for realizing tracking processing such as head tracking, for example.
- the position and direction of the HMD 200 are specified by tracking processing using the sensor unit 210.
- the user's viewpoint position and line-of-sight direction can be specified.
- As a first tracking method, which is one example of the tracking method, a plurality of light receiving elements are provided as the sensor unit 210, as will be described in detail with reference to FIGS. 2A and 2B.
- By receiving, with these light receiving elements, light from light emitting elements provided outside the HMD, the position and direction of the HMD 200 (the user's head) in the three-dimensional space of the real world are identified.
- As a second tracking method, a plurality of light emitting elements (LEDs) are provided in the HMD 200, as will be described in detail with reference to FIGS. 3A and 3B, and the position and direction of the HMD 200 are identified using these light emitting elements.
- a motion sensor is provided as the sensor unit 210, and the position and direction of the HMD 200 are specified using this motion sensor.
- the motion sensor can be realized by, for example, an acceleration sensor or a gyro sensor.
- the position and direction of the HMD 200 in a three-dimensional space in the real world can be specified.
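As one possible concrete realization of the third tracking method, a standard complementary filter can combine the gyro sensor and the acceleration sensor to estimate an orientation angle of the HMD. The patent only names the sensors; the filter itself, its constant, and the function signature are assumptions on our part.

```python
import math


def complementary_pitch(prev_pitch, gyro_rate, accel, dt, alpha=0.98):
    """Estimate the HMD's pitch angle (radians) from motion-sensor readings.

    gyro_rate: angular velocity around the pitch axis (rad/s).
    accel: (ax, ay, az) accelerometer reading in m/s^2; gravity provides an
    absolute (drift-free) pitch reference when the head is roughly still.
    """
    ax, ay, az = accel
    accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Integrate the gyro for short-term responsiveness, and blend in the
    # accelerometer estimate to cancel the gyro's long-term drift.
    return alpha * (prev_pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch
```

Called once per frame with `dt` around 1/60 s, the gyro term tracks fast head motion while the accelerometer term slowly pulls the estimate back toward the gravity reference.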
- the position and direction of the HMD 200 may be specified by a combination of the first tracking method and the second tracking method, or a combination of the first tracking method and the third tracking method.
- tracking processing that directly specifies the user's viewpoint position and line-of-sight direction may be employed.
- the display unit 220 of the HMD 200 can be realized by, for example, an organic EL display (OEL) or a liquid crystal display (LCD).
- The display unit 220 of the HMD 200 includes a first display or first display area set in front of the user's left eye and a second display or second display area set in front of the right eye, and stereoscopic display is possible.
- For the stereoscopic display, for example, a left-eye image and a right-eye image having different parallax are generated; the left-eye image is displayed on the first display and the right-eye image on the second display.
- Alternatively, the left-eye image is displayed in the first display area of a single display, and the right-eye image in its second display area.
- The HMD 200 is provided with two eyepiece lenses (fisheye lenses) for the left eye and the right eye, thereby expressing a VR space that extends over the user's entire field of view. Correction processing for correcting the distortion generated by an optical system such as the eyepieces is performed on the left-eye and right-eye images. This correction processing is performed by the display processing unit 120.
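The distortion correction mentioned above is commonly done by pre-distorting the rendered image with a radial model so that the lens's own distortion cancels it. The patent does not specify a model; this polynomial form and its coefficients are illustrative assumptions.

```python
def predistort(x, y, k1=0.22, k2=0.24):
    """Radially pre-distort one normalized image coordinate.

    (x, y) are coordinates centered on the lens axis. Scaling points outward
    by 1 + k1*r^2 + k2*r^4 (barrel pre-distortion) compensates for the
    pincushion distortion introduced by the eyepiece lens.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale
```

Applying this per pixel (or per vertex of a warp mesh) to both the left-eye and right-eye images is the usual way such a correction pass is implemented.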
- the processing unit 240 of the HMD 200 performs various processes necessary for the HMD 200. For example, the processing unit 240 performs control processing of the sensor unit 210, display control processing of the display unit 220, and the like. Further, the processing unit 240 may perform a three-dimensional sound (stereoscopic sound) process to realize reproduction of a three-dimensional sound direction, distance, and spread.
- The display unit of the simulation system may be a display unit of a type other than an HMD.
- the display unit of the simulation system may be, for example, a display (ordinary 2D monitor or dome screen) in an arcade game device, a television in a home game device, or a display in a personal computer (PC).
- the sound output unit 192 outputs the sound generated by the present embodiment, and can be realized by, for example, a speaker or headphones.
- the I / F (interface) unit 194 performs interface processing with the portable information storage medium 195, and its function can be realized by an ASIC for I / F processing or the like.
- the portable information storage medium 195 is for a user to save various types of information, and is a storage device that retains storage of such information even when power is not supplied.
- the portable information storage medium 195 can be realized by an IC card (memory card), a USB memory, a magnetic card, or the like.
- The communication unit 196 communicates with the outside (other apparatuses) via a wired or wireless network, and its functions can be realized by hardware such as a communication ASIC or a communication processor, or by communication firmware.
- A program (data) for causing a computer to function as each unit of this embodiment may be distributed from an information storage medium of a server (host device) to the information storage medium 180 (or the storage unit 170) via the network and the communication unit 196. Use of an information storage medium by such a server (host device) is also included within the scope of the present invention.
- The processing unit 100 performs game processing (simulation processing), virtual space setting processing, moving body processing, virtual camera control processing, display processing, sound processing, and the like, based on operation information from the operation unit 160, tracking information from the HMD 200 (information on at least one of the position and direction of the HMD; information on at least one of the viewpoint position and line-of-sight direction), a program, and so on.
- each process (each function) of this embodiment performed by each unit of the processing unit 100 can be realized by a processor (a processor including hardware).
- each process of the present embodiment can be realized by a processor that operates based on information such as a program and a memory that stores information such as a program.
- the function of each unit may be realized by individual hardware, or the function of each unit may be realized by integrated hardware.
- the processor may include hardware, and the hardware may include at least one of a circuit that processes a digital signal and a circuit that processes an analog signal.
- the processor can be configured by one or a plurality of circuit devices (for example, ICs) mounted on a circuit board or one or a plurality of circuit elements (for example, resistors, capacitors, etc.).
- The processor may be, for example, a CPU (Central Processing Unit). However, the processor is not limited to a CPU, and various processors such as a GPU (Graphics Processing Unit) or a DSP (Digital Signal Processor) can be used.
- the processor may be an ASIC hardware circuit.
- the processor may include an amplifier circuit, a filter circuit, and the like that process an analog signal.
- The memory (storage unit 170) stores computer-readable instructions, and the processing (functions) of each unit of the processing unit 100 is realized by the processor executing those instructions.
- The instructions here may be an instruction set constituting a program, or instructions that direct operations to the hardware circuits of the processor.
- the processing unit 100 includes an input processing unit 102, an arithmetic processing unit 110, and an output processing unit 140.
- the arithmetic processing unit 110 includes an information acquisition unit 111, a virtual space setting unit 112, a moving body processing unit 113, a virtual camera control unit 114, a game processing unit 115, a notification processing unit 116, a sensation device control unit 117, a display processing unit 120, A sound processing unit 130 is included.
- each process of the present embodiment executed by these units can be realized by a processor (or a processor and a memory). Various modifications such as omitting some of these components (each unit) or adding other components are possible.
- the input processing unit 102 performs processing for receiving operation information and tracking information, processing for reading information from the storage unit 170, and processing for receiving information via the communication unit 196 as input processing.
- As input processing, the input processing unit 102 performs processing for acquiring operation information input by the user using the operation unit 160 and tracking information detected by the sensor unit 210 of the HMD 200, processing for reading information specified by a read command from the storage unit 170, and processing for receiving information from an external device (such as a server) via a network.
- the reception process includes a process of instructing the communication unit 196 to receive information, a process of acquiring information received by the communication unit 196, and writing the information in the storage unit 170, and the like.
- The arithmetic processing unit 110 performs various arithmetic processes, such as information acquisition processing, virtual space setting processing, moving body processing, virtual camera control processing, game processing (simulation processing), display processing, and sound processing.
- the information acquisition unit 111 (program module for information acquisition processing) performs various information acquisition processing. For example, the information acquisition unit 111 acquires position information of the user wearing the HMD 200 and the like. The information acquisition unit 111 may acquire user direction information and the like.
- the virtual space setting unit 112 (program module for virtual space setting processing) performs setting processing for a virtual space (object space) in which objects are arranged.
- Objects representing display objects such as moving bodies (people, robots, cars, trains, airplanes, ships, monsters, animals, and the like), maps (terrain), buildings, auditoriums, courses (roads), trees, walls, and water surfaces are arranged and set in the virtual space.
- That is, the position and rotation angle of each object in the world coordinate system are determined, and the object is arranged at that position (X, Y, Z) with that rotation angle (rotation angles around the X, Y, and Z axes).
- Specifically, the object information storage unit 172 of the storage unit 170 stores object information, that is, information such as the position, rotation angle, moving speed, and moving direction of each object (part object) in the virtual space, in association with an object number.
- the virtual space setting unit 112 performs a process of updating the object information for each frame, for example.
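The object information storage and its per-frame update described above might be organized as follows. The data layout and method names are illustrative assumptions, not the patent's implementation.

```python
from dataclasses import dataclass, field


@dataclass
class ObjectInfo:
    """Object information: position, rotation angle, moving speed, direction."""
    position: list
    rotation: list = field(default_factory=lambda: [0.0, 0.0, 0.0])
    speed: float = 0.0
    direction: list = field(default_factory=lambda: [0.0, 0.0, 1.0])


class ObjectInfoStorage:
    """Stores object information in association with an object number."""

    def __init__(self):
        self._objects = {}  # object number -> ObjectInfo

    def set(self, number, info):
        self._objects[number] = info

    def get(self, number):
        return self._objects[number]

    def update_frame(self, dt):
        # Update every object's position from its speed and direction,
        # once per frame (dt would typically be 1/60 second).
        for info in self._objects.values():
            info.position = [p + info.speed * d * dt
                             for p, d in zip(info.position, info.direction)]
```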
- The moving body processing unit 113 performs various processes for a moving body that moves in the virtual space, such as processing for moving the moving body in the virtual space (object space, game space) and processing for animating the moving body.
- For example, the moving body processing unit 113 performs control processing for moving the user moving body (such as a character or model object) in the virtual space and for animating it (motion, animation), based on operation information input by the user through the operation unit 160, acquired tracking information, a program (movement/motion algorithm), various data (motion data), and the like.
- Specifically, simulation processing is performed that sequentially obtains movement information (position, rotation angle, speed, or acceleration) and motion information (position or rotation angle of part objects) of the moving body for each frame (for example, 1/60 second).
- A frame is the unit of time in which the movement/motion processing (simulation processing) and image generation processing of the moving body are performed.
- the moving body is, for example, a user moving body corresponding to a user (player) in real space.
- the user moving body is a virtual user (virtual player, avatar) in the virtual space, or a boarding moving body (operation moving body) on which the virtual user is boarded (operated).
- the virtual camera control unit 114 controls the virtual camera. For example, a process for controlling the virtual camera is performed based on user operation information, tracking information, and the like input by the operation unit 160.
- the virtual camera control unit 114 controls a virtual camera set as the first-person viewpoint or third-person viewpoint of the user. For example, by setting the virtual camera at a position corresponding to the viewpoint (first-person viewpoint) of the user moving body moving in the virtual space, and setting the viewpoint position and line-of-sight direction of the virtual camera, the position (position coordinates) and attitude (rotation angle around a rotation axis) of the virtual camera are controlled.
- alternatively, the position and attitude of the virtual camera are controlled by setting the virtual camera at a viewpoint position (third-person viewpoint) that follows the user moving body, and setting the viewpoint position and line-of-sight direction of the virtual camera.
- the virtual camera control unit 114 controls the virtual camera so as to follow the change of the user's viewpoint based on the tracking information of the user's viewpoint information acquired by the viewpoint tracking.
- this tracking information (viewpoint tracking information) can be acquired, for example, by performing tracking processing of the HMD 200.
- the virtual camera control unit 114 changes the viewpoint position and the line-of-sight direction of the virtual camera based on the acquired tracking information (information on at least one of the user's viewpoint position and the line-of-sight direction).
- that is, the virtual camera control unit 114 sets the virtual camera so that the viewpoint position and line-of-sight direction (position and attitude) of the virtual camera in the virtual space change in accordance with changes in the viewpoint position and line-of-sight direction of the user in the real space. By doing so, the virtual camera can be controlled to follow the change of the user's viewpoint based on the tracking information of the user's viewpoint information.
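The viewpoint-following control described above can be sketched as follows, assuming the tracking information supplies change values (deltas) from an initial viewpoint position and line-of-sight direction. All names and the Euler-angle representation are illustrative assumptions, not the embodiment's implementation.

```python
# Illustrative sketch: the virtual camera follows the tracked change of the
# user's real-space viewpoint (names and representation are assumptions).

class VirtualCamera:
    def __init__(self):
        self.position = (0.0, 0.0, 0.0)   # viewpoint position in virtual space
        self.rotation = (0.0, 0.0, 0.0)   # rotation angles around X/Y/Z axes

def follow_viewpoint(camera, base_pos, base_rot, tracked_dpos, tracked_drot):
    # Apply the change information from tracking (delta from the initial
    # viewpoint position / line-of-sight direction) to the camera.
    camera.position = tuple(b + d for b, d in zip(base_pos, tracked_dpos))
    camera.rotation = tuple(b + d for b, d in zip(base_rot, tracked_drot))

cam = VirtualCamera()
# e.g. user stepped 0.5 m right, 0.2 m forward, turned 30 degrees
follow_viewpoint(cam, (0, 1.6, 0), (0, 0, 0), (0.5, 0.0, 0.2), (0, 30, 0))
```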
- the game processor 115 performs various game processes for the user to play the game.
- the game processing unit 115 (simulation processing unit) executes various simulation processes for the user to experience virtual reality.
- the game process is, for example, a process of starting a game when a game start condition is satisfied, a process of advancing the started game, a process of ending the game when a game end condition is satisfied, or a process of calculating a game result.
- the notification processing unit 116 performs various types of notification processing. For example, a warning notification process for the user is performed.
- the notification process may be, for example, a notification process using an image or sound, or a notification process using a sensation device such as a vibration device or an air cannon.
- the sensation device control unit 117 performs various control processes of the sensation device. For example, the sensation device is controlled to allow the user to experience virtual reality.
- the display processing unit 120 performs display processing of a game image (simulation image). For example, drawing processing is performed based on the results of the various processes (game process, simulation process) performed by the processing unit 100, thereby generating an image and displaying it on the display unit 220. Specifically, geometric processing such as coordinate transformation (world coordinate transformation, camera coordinate transformation), clipping processing, perspective transformation, or light source processing is performed, and based on the processing result, drawing data (position coordinates of the vertices of the primitive surface, texture coordinates, color data, normal vector, α value, etc.) is created.
- then, based on this drawing data (primitive surface data), the object (one or a plurality of primitive surfaces) after perspective transformation (after geometry processing) is drawn in the drawing buffer 178 (a buffer such as a frame buffer or work buffer that can store image information in units of pixels). Thereby, an image that can be seen from the virtual camera (a given viewpoint; first and second viewpoints for the left eye and the right eye) is generated in the virtual space.
- the drawing processing performed by the display processing unit 120 can be realized by vertex shader processing, pixel shader processing, or the like.
- the sound processing unit 130 performs sound processing based on the results of the various processes performed by the processing unit 100. Specifically, game sounds such as music (BGM), sound effects, or voices are generated, and the game sounds are output to the sound output unit 192. Note that part of the sound processing of the sound processing unit 130 (for example, three-dimensional sound processing) may be realized by the processing unit 240 of the HMD 200.
- the output processing unit 140 performs various types of information output processing. For example, the output processing unit 140 performs processing for writing information in the storage unit 170 and processing for transmitting information via the communication unit 196 as output processing. For example, the output processing unit 140 performs a process of writing information specified by a write command in the storage unit 170 or a process of transmitting information to an external apparatus (server or the like) via a network.
- the transmission process is, for example, a process of instructing the communication unit 196 to transmit information, or a process of passing the information to be transmitted to the communication unit 196.
- the simulation system of this embodiment includes the virtual space setting unit 112, the moving body processing unit 113, and the display processing unit 120, as shown in FIG. 1.
- the virtual space setting unit 112 performs processing for setting a virtual space in which objects are arranged. For example, a process is performed in which an object of a user moving object corresponding to the user, an object of an opponent moving object such as an enemy, and an object constituting a map or background are set in the virtual space.
- the user moving body corresponding to the user is, for example, a moving body that the user operates with the operation unit 160 or a moving body that moves in the virtual space following the movement of the user in the real space. This user moving body is called, for example, a character or an avatar.
- the user moving body may be a boarding moving body such as a robot on which the user boards. Further, the user moving body may be a display object whose image is displayed, or a virtual object whose image is not displayed.
- the moving body processing unit 113 performs processing for moving a user moving body (virtual camera) corresponding to the user in the virtual space.
- the user moving body is moved in the virtual space based on the operation information input by the user through the operation unit 160.
- specifically, the moving body processing unit 113 performs processing for moving the user moving body in the virtual space based on the acquired position information (viewpoint tracking information). For example, the user moving body is moved in the virtual space so as to follow the movement of the user in the real space. For example, based on the moving speed and moving acceleration of the user moving body, processing for updating the position of the user moving body for each frame is performed, thereby moving the user moving body in the virtual space (virtual field).
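The follow movement described above can be sketched as below, assuming a simple scale-and-offset mapping from the real play field to the virtual field. The mapping, class names, and 2D coordinates are illustrative assumptions.

```python
# Minimal sketch: the user moving body in the virtual field follows the
# user's tracked position in the real play field (illustrative names).

def real_to_virtual(real_pos, scale=1.0, origin=(0.0, 0.0)):
    # Map a tracked 2D play-field position to virtual-field coordinates.
    return (origin[0] + real_pos[0] * scale, origin[1] + real_pos[1] * scale)

class UserMovingBody:
    def __init__(self):
        self.position = (0.0, 0.0)

    def update(self, tracked_real_pos):
        # Called once per frame so the avatar follows the real user.
        self.position = real_to_virtual(tracked_real_pos)

avatar = UserMovingBody()
avatar.update((2.0, 3.0))   # user tracked at (2 m, 3 m) in the play field
```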
- the display processing unit 120 performs drawing processing of an image (object) in the virtual space. For example, a drawing process of an image viewed from a virtual camera (a given viewpoint) in the virtual space is performed. For example, a drawing process of an image seen from a virtual camera set as a viewpoint (first person viewpoint) of a user moving body such as a character (avatar) is performed. Alternatively, a drawing process of an image seen from a virtual camera set to a viewpoint (third person viewpoint) that follows the user moving body is performed.
- the generated image is desirably a stereoscopic image composed of a left-eye image and a right-eye image.
- the virtual space setting unit 112 sets a plurality of first to Mth virtual spaces (M is an integer of 2 or more) as the virtual space. Specifically, the virtual space setting unit 112 sets a first virtual space and a second virtual space as virtual spaces. For example, an arrangement setting process for objects constituting the first virtual space and an arrangement setting process for objects constituting the second virtual space are performed.
- when the first virtual space is a room space as described later, an object placement setting process corresponding to objects placed in the room is performed.
- when the second virtual space is an ice-country space as described later, an object placement setting process corresponding to the glaciers or sea of the ice country is performed.
- the virtual space setting unit 112 sets, for example, a third virtual space.
- when the third virtual space is a space on top of a train as described later, an object placement setting process corresponding to the train, tunnel, background, and the like is performed.
- the second virtual space is a virtual space that is linked to the first virtual space via a singular point (in other words, a passing point; hereinafter the same).
- the third virtual space may be a virtual space linked to the first virtual space via a singular point, or a virtual space linked to the second virtual space via a singular point.
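One way to hold the linkage between virtual spaces via singular points (the information that, as described later, is stored in the storage unit 170) can be sketched as follows. The data layout and names here are assumptions for illustration only.

```python
# Sketch of storing which virtual spaces are linked via which singular
# points (illustrative data layout, not the embodiment's storage format).

class SingularPoint:
    def __init__(self, name, space_a, space_b):
        self.name = name
        self.spaces = (space_a, space_b)   # the two linked virtual spaces

# First virtual space (room) linked to second (ice country) by a door,
# and second linked to third (space on the train) by another singular point.
links = [
    SingularPoint("door", "room", "ice_country"),
    SingularPoint("gate", "ice_country", "train_top"),
]

def linked_spaces(space):
    # Every virtual space reachable from `space` via one singular point.
    out = []
    for sp in links:
        if space in sp.spaces:
            out.append(sp.spaces[1] if sp.spaces[0] == space else sp.spaces[0])
    return out
```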
- before the switching condition is satisfied, the display processing unit 120 sets the position information (position coordinates, direction, etc.) of the user moving body or the virtual camera as position information of the first virtual space. That is, the position information of the user moving body or the virtual camera is associated as position information of the first virtual space.
- the display processing unit 120 performs a process of drawing at least an image of the first virtual space as the first drawing process.
- for example, as the first drawing process, the display processing unit 120 draws an image of the second virtual space in addition to the image of the first virtual space, and generates an image in which the image of the second virtual space is displayed in the region corresponding to the singular point (the region including the singular point).
- the drawing process of the image in the first virtual space is realized, for example, by drawing an object configuring the first virtual space and drawing an image that can be seen from the virtual camera in the first virtual space.
- for the region (display region) corresponding to the singular point, the drawing target is changed to the objects constituting the second virtual space, and the drawing is realized by drawing those objects as an image seen from the virtual camera in the first virtual space.
- when the switching condition is satisfied by the user moving body or the virtual camera passing through the place corresponding to the singular point in the first traveling direction, the display processing unit 120 sets the position information of the user moving body or the virtual camera as position information of the second virtual space. For example, the position information of the user moving body or the virtual camera is associated as position information of the second virtual space instead of the first virtual space. Then, the display processing unit 120 performs at least a process of drawing an image of the second virtual space as the second drawing process.
- for example, as the second drawing process, the display processing unit 120 draws an image of the first virtual space in addition to the image of the second virtual space, and generates an image in which the image of the first virtual space is displayed in the region corresponding to the singular point.
- the drawing process of the image in the second virtual space is realized, for example, by drawing an object constituting the second virtual space and drawing an image that can be seen from the virtual camera in the second virtual space.
- for the region (display region) corresponding to the singular point, the drawing target is changed to the objects constituting the first virtual space, and the drawing is realized by drawing those objects as an image seen from the virtual camera in the second virtual space.
- the region (object or area) corresponding to the singular point is, for example, the region of a door (gate) described later.
- in the first drawing process, which draws the image of the second virtual space in addition to the image of the first virtual space, drawing is performed so that the image of the second virtual space on the other side of the door is displayed in the door region.
- in the second drawing process, which draws the image of the first virtual space in addition to the image of the second virtual space, drawing is performed so that the image of the first virtual space on the other side of the door is displayed in the door region.
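The door-region drawing described above can be illustrated conceptually as follows: for each pixel, decide which virtual space's objects are drawn there. This is a deliberately simplified per-pixel sketch (real implementations would use e.g. stencil or render-to-texture techniques); all names are assumptions.

```python
# Conceptual sketch of the first drawing process: the main region shows the
# first virtual space, while the door (singular point) region shows the
# second virtual space on the other side. Purely illustrative.

def first_drawing_process(door_pixels, width, height):
    # Returns, per pixel, which virtual space's objects are drawn there.
    frame = {}
    for y in range(height):
        for x in range(width):
            frame[(x, y)] = "second" if (x, y) in door_pixels else "first"
    return frame

# A 3x3 "screen" whose center pixel falls inside the door region.
frame = first_drawing_process(door_pixels={(1, 1)}, width=3, height=3)
```

The second drawing process is symmetric: swap the roles of the first and second virtual spaces.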
- the display processing unit 120 generates an image in which the image of the first virtual space is displayed in the region corresponding to the singular point when the line-of-sight direction of the virtual camera faces the direction opposite to the first traveling direction. For example, after the user moving body or virtual camera passes through the place corresponding to the singular point, when the line-of-sight direction of the virtual camera turns to the direction opposite to the traveling direction, an image in which the image of the first virtual space is displayed in the region corresponding to the singular point is generated as the image seen in the line-of-sight direction of the virtual camera.
- that is, for the region (main region) other than the region corresponding to the singular point, an image drawing the objects constituting the second virtual space is generated, and for the region corresponding to the singular point, an image drawing the objects constituting the first virtual space is generated.
- on the other hand, when the line-of-sight direction of the virtual camera does not face the direction opposite to the first traveling direction, as the second drawing process, a process of drawing an image of the second virtual space is performed, and a process of drawing an image of the first virtual space is not performed. In this case, the image of the second virtual space is drawn also in the region corresponding to the singular point.
- note that the direction opposite to the first traveling direction need not be exactly the reverse direction of the first traveling direction, and corresponds to, for example, the negative direction side when the first traveling direction is taken as the positive direction.
- a singular point is a point at which, under a given standard, that standard cannot be applied.
- for example, when no singular point exists, the rule (standard) of moving only in the first virtual space is applied to the user moving body or the virtual camera. That is, the user moving body or the virtual camera moves under the rule of moving in the first virtual space.
- the singular point of the present embodiment is a point at which such a rule, which is the reference, is not applied and the rule is not followed.
- that is, at the singular point, the rule (standard) of moving in the first virtual space is not applied to the user moving body or the virtual camera, and the user moving body or the virtual camera moves into the second virtual space, which is different from the first virtual space.
- the singular point can be said to be a switching point for associating the position information of the user moving body or the virtual camera with the position information of the first virtual space or the position information of the second virtual space.
- the first virtual space and the second virtual space are linked via a singular point so that the association with the position information of the user moving body or the virtual camera is switched by the establishment of the switching condition.
- information for connecting the first virtual space and the second virtual space via a singular point is stored in the storage unit 170.
- the switching condition of the present embodiment is a condition for switching the correspondence between the position information of such a user moving body or virtual camera and the position information of the first and second virtual spaces by a singular point.
- the location corresponding to the singular point need not be a point, for example, and may be a surface or a region. For example, it may be a surface or region including a singular point.
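Judging that the user moving body passed the place corresponding to the singular point in the first traveling direction can be sketched as a plane-crossing test. Treating the singular point's place as the plane x = px, crossed in the +x direction, is an assumption for illustration; the embodiment only requires that some surface or region containing the singular point be passed.

```python
# Sketch of judging passage of the singular point in the first traveling
# direction, modeled as crossing the plane x = px in +x (an assumption).

def passed_in_first_direction(prev_pos, cur_pos, px):
    # True when the body crossed the plane x = px while moving in +x
    # (the "positive direction based on the singular point").
    return prev_pos[0] < px <= cur_pos[0]

# Body moved from x = 0.9 to x = 1.1 across a door plane at x = 1.0.
crossed = passed_in_first_direction((0.9, 0.0), (1.1, 0.0), px=1.0)
```

Crossing the same plane in −x would correspond to the second traveling direction, i.e. returning to the first virtual space.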
- when the user moving body or the virtual camera passes through the place corresponding to the singular point in the second traveling direction and the switching condition is thereby satisfied, the display processing unit 120 sets the position information of the user moving body or the virtual camera as position information of the first virtual space. Then, the display processing unit 120 performs at least a process of drawing an image of the first virtual space as the first drawing process. For example, as the first drawing process, a process of drawing an image of the first virtual space, or of drawing an image of the second virtual space in addition to the image of the first virtual space, is performed. For example, when a user moving body or virtual camera that moved from the first virtual space to the second virtual space via the singular point returns to the first virtual space, the position information of the user moving body or virtual camera is associated as position information of the first virtual space, and the first drawing process in the first virtual space is performed.
- thereby, a user moving body or the like can come and go between the first virtual space and the second virtual space via the singular point.
- that is, the user moving body or virtual camera to which the rule (standard) of moving only in the second virtual space is applied passes through the place corresponding to the singular point in the second traveling direction, whereupon the rule is no longer applied and the user moving body or virtual camera can move in the first virtual space.
- the second traveling direction is different from the first traveling direction.
- for example, the first traveling direction is the positive direction with respect to the singular point (the location of the singular point), and the second traveling direction is the negative direction with respect to the singular point. That is, the second traveling direction is the direction opposite to the first traveling direction.
- when the switching condition (second switching condition) is satisfied by the user moving body or the virtual camera passing through a place corresponding to a singular point (second singular point), the display processing unit 120 sets the position information of the user moving body or the virtual camera as position information of the third virtual space.
- the position information of the user moving body is associated as the position information of the third virtual space.
- the movement to the third virtual space may be a movement from the first virtual space to the third virtual space via a singular point that connects the first virtual space and the third virtual space.
- the movement from the second virtual space to the third virtual space may be performed via a singular point that connects the second virtual space and the third virtual space.
- then, the display processing unit 120 performs at least a process of drawing an image of the third virtual space as the third drawing process. For example, when the third virtual space is linked to the first virtual space via a singular point, as the third drawing process, a process of drawing an image of the third virtual space, or of drawing an image of the first virtual space in addition to the image of the third virtual space, is performed. Similarly, when the third virtual space is linked to the second virtual space via a singular point, as the third drawing process, a process of drawing an image of the third virtual space, or of drawing an image of the second virtual space in addition to the image of the third virtual space, is performed.
- when a plurality of users play, the display processing unit 120 permits the third drawing process on the condition that the plurality of user moving bodies corresponding to the plurality of users, or the plurality of virtual cameras, have passed through the place corresponding to the singular point and returned to the first virtual space. For example, suppose that the first user moving body or first virtual camera corresponding to the first user has returned from the second virtual space to the first virtual space, but the second user moving body or second virtual camera corresponding to the second user has not returned from the second virtual space to the first virtual space. In this case, even when the switching condition (second switching condition) is satisfied, the movement of the first user moving body or the first virtual camera to the third virtual space via the singular point is not permitted, and the third drawing process is not permitted.
- then, when both the first and second user moving bodies, or both the first and second virtual cameras, have returned from the second virtual space to the first virtual space and the switching condition is thereafter satisfied, the movement of the first and second user moving bodies or the first and second virtual cameras to the third virtual space via the singular point is permitted, and the third drawing process is permitted.
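The multi-user gating described above reduces to checking that every user's moving body is back in the first virtual space before permitting the third drawing process. A minimal sketch (names and space labels are assumptions):

```python
# Sketch: the third drawing process (movement to the third virtual space)
# is permitted only after every user has returned to the first virtual
# space. Illustrative names only.

def third_space_permitted(user_spaces):
    # user_spaces: virtual space currently associated with each user's body.
    return all(space == "first" for space in user_spaces.values())

# user2 is still in the second virtual space, so permission is withheld
# even if the second switching condition is otherwise satisfied.
permitted = third_space_permitted({"user1": "first", "user2": "second"})
```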
- for example, the display processing unit 120 determines whether the switching condition is satisfied based on whether the user moving body or the virtual camera has passed the singular point. For example, when the user moving body or virtual camera has not passed the singular point, it is determined that the switching condition is not satisfied, and when the user moving body or virtual camera has passed the singular point, it is determined that the switching condition is satisfied.
- the display processing unit 120 may determine whether or not the switching condition is satisfied based on user input information or sensor detection information. For example, the display processing unit 120 determines whether the switching condition is satisfied based on user operation information, voice input information, and the like input via the operation unit 160. For example, the determination is made based on input information of a user instructing establishment of the switching condition. Alternatively, the display processing unit 120 determines whether the switching condition is satisfied based on the detection information of the sensor. For example, a sensor is provided on an object corresponding to a singular point, and it is determined whether a switching condition is satisfied based on detection information of the sensor. When the object is a door (gate), it is determined whether the switching condition is satisfied by detecting the open / closed state of the door based on the detection information of the sensor.
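The several judgment sources above (passage of the singular point, user input information, sensor detection information such as a door's open/closed state) can be combined as sketched below. The function and flag names are assumptions for illustration.

```python
# Sketch of judging the switching condition from the sources described
# above; any one of them may establish the condition (illustrative names).

def switching_condition(passed_singular_point=False,
                        user_input=False,
                        door_sensor_open=False):
    # Passage judgment, user input information, or sensor detection
    # information (e.g. a door-open sensor) may each satisfy the condition.
    return passed_singular_point or user_input or door_sensor_open

# e.g. the sensor on the door object detects that the door was opened.
cond = switching_condition(door_sensor_open=True)
```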
- for example, an object corresponding to a singular point is arranged in the play field (play space) in the real space where the user moves. For example, a door object corresponding to the singular point is arranged in the play field.
- the display processing unit 120 determines that the user moving object or the virtual camera has passed the singular point when the user passes the location of the object in the real space.
- the passage condition of the singular point is determined by whether or not the user has passed the location of the object in the real space.
- then, the position information of the user moving body or the virtual camera is set as position information of the second virtual space, and the second drawing process is performed.
- the display processing unit 120 may perform processing for setting or not setting a singular point.
- a singular point that has not been set is set to a set state, or a singular point that has been set is set to a non-set state.
- the singularity setting process is, for example, a process of displaying an object corresponding to a singularity in a virtual space such as the first to third virtual spaces or permitting movement between virtual spaces via singularities.
- the singularity non-setting process is a process of hiding an object corresponding to a singularity in a virtual space such as the first to third virtual spaces, or of disallowing movement between virtual spaces via singularities.
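The set / non-set processing of a singular point can be sketched as a simple state toggle that controls both the display of the corresponding object (e.g. the door) and whether inter-space movement is permitted. The class and method names are assumptions.

```python
# Sketch of singular point set / non-set processing (illustrative names).

class Singularity:
    def __init__(self):
        self.is_set = False

    def set_point(self):
        # Set state: display the door object, permit inter-space movement.
        self.is_set = True

    def unset_point(self):
        # Non-set state: hide the door object, disallow movement.
        self.is_set = False

    def movement_allowed(self):
        return self.is_set

sp = Singularity()
sp.set_point()   # a singular point that was not set is put into a set state
```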
- the simulation system of the present embodiment includes a sensation device control unit 117.
- the sensation device control unit 117 controls the sensation device for allowing the user to experience virtual reality.
- the bodily sensation device is a device for causing a user to experience virtual reality by working on a sensory organ other than the visual organ of the user, for example.
- the sensation apparatus is, for example, a blower or a vibration device described later.
- the blower can be realized by, for example, a sirocco fan or a propeller fan.
- the blower may blow hot air or cold air.
- the vibration device can be realized by, for example, a transducer or a vibration motor.
- the body sensation device may be a halogen heater or the like.
- the body sensation device may also be realized by a mechanical mechanism such as an air spring or an electric cylinder. For example, it may be a body sensation device that makes the user feel shaking or tilting by such a mechanical mechanism.
- the sensation device control unit 117 makes the control of the sensation device when the switching condition is not satisfied different from the control of the sensation device when the switching condition is satisfied.
- for example, the sensation device control unit 117 may leave the sensation device inactive when the switching condition is not satisfied (before the switching condition is satisfied), and operate the sensation device when the switching condition is satisfied (after the switching condition is satisfied).
- the control mode of the sensation apparatus and the type of sensation apparatus to be controlled may be different depending on whether the switching condition is not satisfied or when the switching condition is satisfied. For example, when the switching condition is not satisfied, the sensation apparatus is controlled in the first control mode, and when the switching condition is satisfied, the sensation apparatus is controlled in the second control mode different from the first control mode. Control.
- the first control mode and the second control mode differ in the degree of sensation experienced by the user.
- the first control mode and the second control mode have different blower strengths or different wind temperatures.
- the first control mode and the second control mode differ in vibration intensity and vibration time.
- alternatively, when the switching condition is not satisfied, a first type of sensation apparatus may be controlled, and when the switching condition is satisfied, a second type of sensation apparatus may be controlled. That is, the type of the sensation apparatus to be controlled is varied depending on whether or not the switching condition is satisfied.
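The two control modes described above (for example, different blower strength before and after the switching condition is satisfied) can be sketched as follows. The specific strength values are assumptions for illustration only.

```python
# Sketch of varying sensation-device control with the switching condition:
# a weaker first control mode before it is satisfied, a stronger second
# control mode after (the strengths are illustrative assumptions).

def blower_strength(switching_condition_satisfied):
    # First control mode vs. second control mode (different blower strength).
    return 0.8 if switching_condition_satisfied else 0.2

before = blower_strength(False)   # first control mode
after = blower_strength(True)     # second control mode
```

The same pattern applies to differing wind temperature, vibration intensity, or vibration time, or to selecting a different type of sensation apparatus entirely.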
- the simulation system also includes a notification processing unit 116.
- the notification processing unit 116 performs an output process of notification information on collision between users in real space. For example, based on the user's position information acquired by the information acquisition unit 111, a warning notification process for a collision between users is performed. For example, a prediction process is performed as to whether or not the user is in a collision positional relationship (approaching relationship). If such a positional relationship is reached, a notification process for warning that there is a possibility of a collision is performed.
- the prediction process can be realized by determining whether or not there is a possibility that the users have a positional relationship of collision based on the position, speed, acceleration, or the like of each user moving body corresponding to each user.
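The prediction process described above can be sketched by extrapolating each user's position from its current position and velocity and testing the predicted separation against a threshold. The prediction horizon and threshold values are illustrative assumptions.

```python
# Sketch of collision prediction between two users: extrapolate positions
# from position and velocity, warn when they come too close (the horizon
# and threshold are assumptions, not values from the embodiment).

def predict_collision(pos_a, vel_a, pos_b, vel_b, horizon=1.0, threshold=0.5):
    # Predicted 2D positions after `horizon` seconds.
    pa = (pos_a[0] + vel_a[0] * horizon, pos_a[1] + vel_a[1] * horizon)
    pb = (pos_b[0] + vel_b[0] * horizon, pos_b[1] + vel_b[1] * horizon)
    dist = ((pa[0] - pb[0]) ** 2 + (pa[1] - pb[1]) ** 2) ** 0.5
    return dist < threshold   # True -> output a warning notification

# Two users walking toward each other from 2 m apart at 1 m/s each.
warn = predict_collision((0, 0), (1, 0), (2, 0), (-1, 0))
```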
- the warning notification processing can be realized by an image displayed on the HMD 200, by sound output from headphones or from a speaker installed in the play field, by vibration from a vibration device provided in the user's equipment such as a weapon, clothing, or accessory, or by various types of sensation devices (devices using wind, vibration, light, an air cannon, sound, etc.) provided in the real-space field.
- the simulation system of this embodiment includes an information acquisition unit 111 that acquires position information of the user in real space.
- the information acquisition unit 111 acquires user position information through user viewpoint tracking or the like.
- the moving body processing unit 113 performs processing for moving the user moving body based on the acquired position information, and the display processing unit 120 generates a display image of the HMD 200 worn by the user.
- the user moving body in the virtual space is moved so as to follow the movement of the user in the real space. Then, an image that can be seen from the virtual camera corresponding to the user moving object is generated as a display image of the HMD 200.
- before the switching condition is satisfied, the display processing unit 120 sets the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, as position information of the first virtual space.
- the first drawing process is performed by associating the position information of the user moving body or the virtual camera as the position information of the first virtual space.
- as the first drawing process, at least a process of drawing an image of the first virtual space is performed.
- then, when the switching condition is satisfied, the position information of the user moving body or the virtual camera, specified by the position information of the user in the real space, is set as position information of the second virtual space.
- the second drawing process is performed by associating the position information of the user moving body or the virtual camera as the position information of the second virtual space.
- as the second drawing process, at least a process of drawing an image of the second virtual space is performed.
- the information acquisition unit 111 acquires position information of the user who wears the HMD 200 so as to cover the field of view. For example, the information acquisition unit 111 acquires the position information of the user in the real space based on the tracking information of the HMD 200 and the like. For example, the position information of the HMD 200 is acquired as the position information of the user wearing the HMD 200. Specifically, when the user is located in a play field (simulation field, play area) in the real space (real world), position information in the play field is acquired. Note that the position information of the user may be acquired by a method of directly tracking the user or a part such as the user's head, instead of by the tracking process of the HMD 200.
- the virtual camera control unit 114 controls the virtual camera so as to follow the change of the user's viewpoint based on the tracking information of the user's viewpoint information.
- for example, the input processing unit 102 acquires tracking information (viewpoint tracking information) of the viewpoint information of the user wearing the HMD 200.
- viewpoint information that is at least one of the user's viewpoint position and line-of-sight direction is acquired.
- This tracking information can be acquired by performing tracking processing of the HMD 200, for example.
- the user's viewpoint position and line-of-sight direction may be directly acquired by tracking processing.
- for example, the tracking information includes at least one of change information of the viewpoint position from the user's initial viewpoint position (change values of the coordinates of the viewpoint position) and change information of the line-of-sight direction from the user's initial line-of-sight direction (change values of the rotation angle around the rotation axis of the line-of-sight direction). Based on the change information of the viewpoint information included in such tracking information, the user's viewpoint position and line-of-sight direction (information on the position and posture of the user's head) can be specified.
- a virtual reality simulation process is performed as the game process of the game played by the user.
- the virtual reality simulation process is a simulation process for simulating an event in real space in the virtual space, and is a process for causing the user to virtually experience the event. For example, a moving body such as a virtual user corresponding to the user in real space or a boarding moving body is moved in the virtual space, and processing for causing the user to experience changes in the environment and surroundings associated with the movement is performed.
- the processing of the simulation system of this embodiment in FIG. 1 can be realized by a processing device such as a PC installed in a facility, by a processing device worn by the user, or by distributed processing of these processing devices.
- FIG. 2A shows an example of the HMD 200 used in the simulation system of this embodiment.
- the HMD 200 is provided with a plurality of light receiving elements 201, 202, and 203 (photodiodes).
- the light receiving elements 201 and 202 are provided on the front side of the HMD 200, and the light receiving element 203 is provided on the right side of the HMD 200.
- a light receiving element (not shown) is also provided on the left side, upper surface, and the like of the HMD.
- a controller (not shown) is attached to the HMD 200 or the like, and based on this controller, a motion detection process for hands and fingers is realized.
- This controller has, for example, a light emitting unit such as an LED that emits infrared rays, and a plurality of infrared cameras that photograph a hand or a finger illuminated by infrared rays. Based on the image analysis result of the image captured by the infrared camera, the movement of the hand or finger is detected. By providing such a controller, it becomes possible to detect the movement of the user's hand or finger when the door is opened.
- the HMD 200 is provided with a headband 260 and the like so that the user US can stably wear the HMD 200 on the head with a better wearing feeling.
- the HMD 200 is provided with a headphone terminal (not shown). By connecting a headphone 270 (sound output unit 192) to the headphone terminal, for example, processing of three-dimensional sound (three-dimensional audio) is performed.
- the user US can listen to the game sound.
- the operation information of the user US may be input by detecting the head movement or the swinging movement of the user US by the sensor unit 210 of the HMD 200 or the like.
- the user US wears a processing device (backpack PC) (not shown) on his back, for example.
- a processing device is realized by an information processing device such as a notebook PC.
- this processing apparatus and the HMD 200 are connected by a cable (not shown).
- the processing device performs processing for generating an image (game image or the like) displayed on the HMD 200, and the generated image data is sent to the HMD 200 via a cable and displayed on the HMD 200.
- this processing apparatus is also capable of performing each process of this embodiment (information acquisition processing, virtual space setting processing, moving body processing, virtual camera control processing, game processing, notification processing, and sensation device control processing).
- each process of the present embodiment may be realized by a processing device (not shown) such as a PC installed in a facility, or may be realized by distributed processing of that processing device and the processing device worn by the user US.
- base stations 280 and 284 are installed around the simulation system.
- the base station 280 is provided with light emitting elements 281 and 282, and the base station 284 is provided with light emitting elements 285 and 286.
- the light emitting elements 281, 282, 285, and 286 are realized by LEDs that emit laser (infrared laser or the like), for example.
- the base stations 280 and 284 use these light emitting elements 281, 282, 285, and 286 to emit, for example, a laser beam radially.
- the light receiving elements 201 to 203 and the like provided in the HMD 200 in FIG. 2A receive the laser beams from the base stations 280 and 284, thereby realizing tracking of the HMD 200, so that the position and direction of the head of the user US (viewpoint position, line-of-sight direction) can be detected.
- FIG. 3A shows another example of the HMD 200.
- a plurality of light emitting elements 231 to 236 are provided for the HMD 200. These light emitting elements 231 to 236 are realized by LEDs, for example.
- the light emitting elements 231 to 234 are provided on the front side of the HMD 200, and the light emitting element 235 and the light emitting element 236 (not shown) are provided on the back side.
- These light emitting elements 231 to 236 emit (emit) light in a visible light band, for example. Specifically, the light emitting elements 231 to 236 emit light of different colors.
- the imaging unit 150 shown in FIG. 3B is installed in at least one place around the user US (for example, on the front side, the rear side, or the like).
- the light from the light emitting elements 231 to 236 is imaged; that is, spot images of these light emitting elements 231 to 236 appear in the captured image of the imaging unit 150. By performing image processing on this captured image, tracking of the head (HMD) of the user US is realized.
- the imaging unit 150 includes first and second cameras 151 and 152, and by using the captured images of these first and second cameras 151 and 152, the position of the head of the user US in the depth direction can be detected.
- the rotation angle (line of sight) of the head of the user US can also be detected. Therefore, by using such an HMD 200, even when the user US faces in any direction of all 360 degrees around, a corresponding image (an image viewed from a virtual camera corresponding to the user's viewpoint in the virtual space (virtual three-dimensional space)) can be displayed on the display unit 220 of the HMD 200.
- as the light emitting elements 231 to 236, infrared LEDs may be used instead of visible light LEDs.
- the position or movement of the user's head may be detected by another method such as using a depth camera.
- the tracking processing method for detecting the user's viewpoint position and line-of-sight direction is not limited to the method described with reference to FIGS.
- the tracking process may be realized by the HMD 200 alone, using a motion sensor or the like provided in the HMD 200. That is, the tracking process is realized without providing external devices such as the base stations 280 and 284 in FIG. 2B or the imaging unit 150 in FIG. 3B. Alternatively, viewpoint information such as the user's viewpoint position and line-of-sight direction may be detected by various viewpoint tracking methods such as well-known eye tracking, face tracking, or head tracking.
- the display unit on which the image generated by the simulation system is displayed is not limited to the HMD, and may be a normal display used in a home game device, a business game device, or a PC.
- in the following, a case where the determination target for passage through the singular point, the setting target of the position information of the virtual space, and so on is mainly a user moving body will be described as an example.
- however, the determination target for passage through the singular point, the setting target of the position information of the virtual space, and the like may be a virtual camera.
- the user moving object corresponding to the user is described as a user character.
- the method of the present embodiment can be applied to various games (virtual experience games, battle games, RPGs, action games, competition games, sports games, horror experience games, simulation games of vehicles such as trains and airplanes, puzzle games, communication games, music games, and the like), and can also be applied to applications other than games.
- this game is a virtual reality (VR) experience game in which the user can move among a plurality of virtual spaces (virtual worlds).
- FIG. 4 and 5 are explanatory diagrams of the play field FL in the room used in the simulation system of this embodiment.
- FIG. 4 is a perspective view illustrating the play field FL
- FIG. 5 is a top view.
- in the play field FL (play area, play space) simulating a room, a door DR, a desk DK, a bookshelf BS, and the like are arranged, and windows WD1 and WD2 are provided on the wall.
- the user enters the play field FL simulating this room and enjoys a VR game.
- a plurality of users US1 and US2 enter the room, and these two users US1 and US2 can enjoy a VR game.
- each user US1, US2 wears a processing device (backpack PC) on his or her back, and an image generated by this processing device is displayed on the HMD1 or HMD2 (head-mounted display device).
- a management processing device (not shown) is arranged in the play field FL, and this management processing device performs data synchronization processing (communication processing) between the processing devices worn by the users US1 and US2. For example, synchronization processing is performed so that a user character (user moving body in a broad sense) corresponding to the user US2 is displayed on the HMD1 of the user US1, and a user character corresponding to the user US1 is displayed on the HMD2 of the user US2.
- control processing of the sensation apparatus can be performed.
- an operator waits in the room, operates the management processing device, helps the users US1 and US2 put on the HMD1, HMD2 and jackets, and performs operation work and guidance work for game progress.
- the base stations 280 and 284 described with reference to FIG. 2B are installed in the room of the play field FL, and the position information of the users US1 and US2 can be acquired using these base stations 280 and 284.
- the play field FL is provided with a blower BL and a vibration device VB, which are sensation devices for allowing the user to experience virtual reality.
- the vibration device VB is realized by, for example, a transducer installed under the floor of a room.
- in this game, when the real-world door DR is opened, another world (a world different from the room landscape) spreads beyond the door DR.
- the user can experience a virtual reality that can pass through the door DR and go to another world.
- the images of the first virtual space corresponding to the room are displayed on the HMD1 and HMD2 of the users US1 and US2.
- objects corresponding to the objects placed and installed in the room are arranged in the first virtual space (first object space). For example, objects corresponding to the door DR, the desk DK, the bookshelf BS, and the windows WD1 and WD2 are arranged.
- images seen from the virtual cameras (first and second virtual cameras) corresponding to the viewpoints (first and second viewpoints) of the users US1 and US2 are generated.
- the users US1 and US2 who move while wearing the HMD1 and HMD2 can experience virtual reality as if they were walking around a real room.
- as shown in FIG. 6A, when the user (US1, US2) opens the door DR for the first time, the other side of the door DR remains the room; in the region of the door DR (in a broad sense, the region corresponding to the singular point), a room image is displayed. Then, after the user closes the door DR as shown in FIG. 6B, when the door DR is opened again as shown in FIG. 6C, the other side of the door DR changes to an ice country. That is, as shown in FIG. 7, an image of the ice country, which is an image of the second virtual space VS2 (second object space), is displayed in the region of the door DR (door opening region). At this time, as shown in FIG. 7, an image of the room, such as the bookshelf BS and the windows WD1 and WD2, is displayed around the door DR. That is, the image of the ice country, which is an image of the second virtual space VS2, is displayed in the region of the door DR (the region corresponding to the singular point), while the room image, which is an image of the first virtual space VS1, is displayed in the region other than the door DR.
- the user character UC1 (user moving body in a broad sense) corresponding to the user US1 moves from the room, which is the first virtual space VS1, to the ice country, which is the second virtual space VS2.
- the user character UC1 is a character (display object) that moves in the virtual space as the user US1 moves in the real space, and is also called an avatar.
- the present embodiment mainly deals with the case where the position information of the user US1 in the real space is acquired and the user character UC1 is moved in the virtual space (first and second virtual spaces) based on the acquired position information.
- the present embodiment is not limited to this.
- the user character UC1 may be moved in the virtual space (first and second virtual spaces) based on operation information from the operation unit 160 (game controller or the like) in FIG.
- the movement of the hand or finger of the user US1 (US2) is detected by, for example, a leap motion process, and based on this detection result, a motion process that moves the corresponding part (hand or finger) of the user character is performed. Thus, when the user US1 opens the door DR, the user US1 can visually recognize the movement of his or her own hand or finger by looking at the movement of the hand or finger part of the user character UC1.
- FIG. 9 is an example of an image displayed when the user character UC1 that has moved to the ice country as the second virtual space VS2 looks back to the door DR side as shown in FIG.
- an image of the room, which is an image of the first virtual space VS1, is displayed in the region of the door DR, and an image of the ice country, which is an image of the second virtual space VS2, is displayed around the door DR. That is, the image of the room, which is an image of the first virtual space VS1, is displayed in the region of the door DR, while the image of the ice country, which is an image of the second virtual space VS2, is displayed in the region other than the door DR.
- when the user character UC1 moves toward the door DR and passes through the door DR again, the user character UC1 can return to the room, which is the first virtual space VS1. That is, by passing through the door DR in the first traveling direction in FIG. 7, it is possible to move (warp) from the first virtual space VS1 (room) to the second virtual space VS2 (ice country). On the other hand, in FIG. 9, by passing through the door DR in a second traveling direction different from the first traveling direction, it is possible to move from the second virtual space VS2 (ice country) to the first virtual space VS1 (room). That is, the user character UC1 can freely go back and forth between the first virtual space VS1 and the second virtual space VS2 through the door DR (the place corresponding to the singular point).
- FIG. 10A and FIG. 10B show a situation when the users US1 and US2 pass through the door DR.
- the user characters UC1 and UC2 corresponding to the users US1 and US2 are located in the ice country that is the second virtual space. That is, the position information of the user characters UC1 and UC2 is set as the position information of the second virtual space.
- a blower BL and a vibration device VB, which are sensation devices, are installed.
- the blower BL is installed on the front side of the users US1 and US2, and the vibration device VB is installed under the floor around the users US1 and US2.
- the blower BL starts to blow air, and the users US1 and US2 are exposed to wind (cold wind).
- a whistling snowstorm sound is output from the speakers of the headphones worn by the users US1 and US2.
- the users US1 and US2 can feel a virtual reality as if they came to a real ice country.
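- the control of the sensation devices (blower, vibration device) per virtual space can be sketched as a simple lookup. The space identifiers and settings below are illustrative assumptions, not values from the embodiment:

```python
# Hypothetical per-space device settings; names and levels are illustrative.
SPACE_EFFECTS = {
    "room":  {"blower_on": False, "vibration_level": 0},
    "ice":   {"blower_on": True,  "vibration_level": 1},  # cold wind, low rumble
    "train": {"blower_on": True,  "vibration_level": 3},  # speed wind, floor shake
}

def sensation_settings(space_id):
    """Return the blower/vibration settings for the current virtual space.

    Unknown spaces default to all devices off.
    """
    return SPACE_EFFECTS.get(space_id, {"blower_on": False, "vibration_level": 0})
```

When the virtual space is switched, the controller would simply apply the settings returned for the new space.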
- however, the range in which the users US1 and US2 can move is the same as the area of the room, and they cannot move beyond the wall of the room.
- notification information warning of a collision with the wall or the like is displayed on the HMD1 and HMD2 so that the collision can be avoided.
- when the user opens the door, the scenery of the ice country spreads out beyond it, and the user can see a completely different landscape of sunlight and the whiteness of the ice.
- the sound of the blowing wind, together with the dazzling scenery, improves the virtual reality.
- the user can step in beyond the door.
- the user can look around and see the sea, icebergs, the sun in a clear sky, and diamond dust, and enjoy the scenery of the ice country. Also, as shown in FIGS. 10A and 10B, the sea extends beyond a cliff on one side of the door, and the height sends a chill down the user's spine.
- the user can also enjoy a thrilling horror experience in which the floor under the user's feet is vibrated by the vibration device and the cliff in front of the user collapses.
- the user can frequently go back and forth between the two virtual spaces (virtual worlds); one user can stand in front of the door while the other goes around behind the door and confirms that the other cannot be seen through it.
- by standing sideways in the middle of the doorway, the user can experience the difference in field of view and sound between the right and left halves.
- then, the door DR is closed as shown in FIG. 6D, and when it is opened again as shown in FIG. 6E, the other side of the door DR changes to the space above a train. That is, as shown in FIG. 11A, an image as seen when riding on the train TR, which is an image of the third virtual space VS3, is displayed in the region of the door DR.
- images of the room, such as the bookshelf BS and the windows WD1 and WD2, are displayed in the region other than the door DR.
- the virtual space switching process may be skipped when the door is opened slightly and then closed immediately. For example, depending on the user, when an image as shown in FIG. 11A is displayed, the user may be surprised and immediately close the door. In such a case, it would be undesirable if the image of the train scene as shown in FIG. 11A were no longer displayed the next time the door is opened.
- an image as shown in FIG. 12 is displayed. That is, an image of the room, which is an image of the first virtual space VS1, is displayed in the region of the door DR, while an image on the train (an image of the tunnel, the train, and so on), which is an image of the third virtual space VS3, is displayed around the door DR. In other words, the image of the room, which is an image of the first virtual space VS1, is displayed in the region of the door DR, and the image on the train, which is an image of the third virtual space VS3, is displayed in the region other than the door DR.
- an effect image is displayed in which sparks are scattered when the upper end of the door DR hits the roof of the tunnel TN.
- when the user character UC1 moves toward the door DR and passes through the door DR again, the user character UC1 can return to the room, which is the first virtual space VS1. As shown in FIGS. 6A to 6E, every time the door DR is opened and closed, the virtual space on the other side of the door DR is switched, for example to the room, the ice country, or the train.
- the user can enjoy a thrilling experience of suddenly being thrown into a fast and dangerous place.
- the train repeatedly enters and exits the tunnel, and when the top edge of the door hits the tunnel roof, sparks are scattered and the sense of thrill is further enhanced.
- the vibration of the floor can give the user the feeling of really being on the train, and the blower can give a feeling of speed.
- the first virtual space and the second virtual space that is linked to the first virtual space via a singular point are set as the virtual space.
- the first virtual space is, for example, a virtual space corresponding to a room
- the second virtual space is a virtual space corresponding to an ice country.
- These first and second virtual spaces are connected via a singular point corresponding to the door DR.
- a singular point is a point at which a reference law (rule) ceases to apply. For example, the rule that the user character moves only in the first virtual space normally applies. The singular point of the present embodiment is a point where such a law (standard) does not apply and is not followed. When the switching condition is established by passing through the singular point, the law of moving in the first virtual space is no longer applied to the user character, and the user character moves in the second virtual space (ice country), which is different from the first virtual space.
- the position information of the user character is set as the position information of the first virtual space.
- in FIG. 13A, information on the position P1 of the user character UC1 is set as position information in the first virtual space VS1 (room).
- as the first drawing process, a process of drawing at least the image of the first virtual space VS1 (room) is performed. That is, an image seen from the virtual camera (user's viewpoint) in the first virtual space VS1 is generated.
- in addition, a process of drawing the image of the second virtual space VS2 (ice country) is performed to generate an image in which the image of the second virtual space VS2 is displayed in the region of the door DR, which is the region corresponding to the singular point. That is, as shown in FIG. 7, an image is generated in which the image of the ice country is displayed in the region of the door DR and the room image is displayed in the region other than the door DR.
- the switching condition is established when the user character UC1 (virtual camera) passes through the location of the door DR corresponding to the singular point (SG) in the first traveling direction D1. That is, in FIG. 13A, the switching condition is established when the user character UC1 moves from the position P1 through the door DR to the position P2.
- when the switching condition is established, the position information of the user character UC1 (virtual camera) is set as position information in the second virtual space VS2 (ice country). For example, information on the positions P2 and P3 of the user character UC1 is set as position information in the second virtual space VS2 (ice country).
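- the determination of passage through the singular point in a given traveling direction, and the resulting switch of position information between virtual spaces, can be sketched as follows. The door plane, opening extent, and all names are illustrative assumptions, not part of the embodiment:

```python
def door_crossing(prev_pos, cur_pos, door_x=0.0, door_z_range=(-0.5, 0.5)):
    """Detect passage through the door plane (here, the plane x == door_x).

    Positions are (x, z) pairs on the floor. Returns +1 for a crossing in the
    first traveling direction D1, -1 for the opposite direction D2, and 0 for
    no crossing (switching condition not satisfied).
    """
    px, pz = prev_pos
    cx, cz = cur_pos
    if (px - door_x) * (cx - door_x) >= 0:
        return 0  # stayed on one side of the plane: no passage
    # Interpolate z at the crossing point to require passing through the opening.
    t = (door_x - px) / (cx - px)
    z = pz + t * (cz - pz)
    if not (door_z_range[0] <= z <= door_z_range[1]):
        return 0  # went around the door frame, not through the opening
    return 1 if cx > px else -1

def next_space(current, crossing):
    """Toggle position information between the first and second virtual space."""
    if crossing == 0:
        return current
    return "VS2" if current == "VS1" else "VS1"
```

Passing in direction D1 switches the character's position information from VS1 to VS2; passing back in direction D2 restores it, matching the back-and-forth movement described above.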
- as the second drawing process, a process of drawing at least the image of the second virtual space VS2 is performed. That is, an image seen from the virtual camera (user's viewpoint) in the second virtual space VS2 is generated.
- in addition, a process of drawing the image of the first virtual space VS1 (room) is performed to generate an image in which the image of the first virtual space VS1 is displayed in the region of the door DR, which is the region corresponding to the singular point. That is, as shown in FIG. 9, an image is generated in which the image of the room is displayed in the region of the door DR and the image of the ice country is displayed in the region other than the door DR.
- it is assumed that the user character UC1 (virtual camera) passes through the location of the door DR (the place corresponding to the singular point SG) in the first traveling direction D1, and that the line-of-sight direction SL of the virtual camera faces the direction opposite to the first traveling direction D1. That is, in FIG. 13A, after passing through the door DR in the first traveling direction D1, the user character UC1 looks back toward the door DR, and the line-of-sight direction SL of the virtual camera is directed toward the door DR. In this case, the line-of-sight direction SL is, for example, a direction opposite to the first traveling direction D1. In such a case, in the present embodiment, as shown in FIG. 9, an image in which the image of the first virtual space VS1 (room) is displayed in the region of the door DR (the region corresponding to the singular point SG) is generated.
- in FIG. 13A, it is assumed that the user character UC1 moves from the position P2 to the position P3 in the second virtual space VS2, and the line-of-sight direction SL of the virtual camera is directed toward the door DR. When the user character moves from the position P2 to the position P3 on the back side of the door DR and the line-of-sight direction SL is directed toward the door DR, the room image is not displayed in the region of the door DR. On the other hand, when the line-of-sight direction SL of the virtual camera is directed toward the door DR at the position P2 in FIG. 13A, the room on the other side of the door DR can be seen through the opening region of the door DR, as shown in FIG. 9.
- the reason why such an image is displayed is that, in the present embodiment, the first virtual space and the second virtual space are connected discontinuously via the singular point SG. Therefore, as shown in FIG. 13A, when the room position P1, which is the movement source position, is viewed from the position P2, the room image can be seen through the opening region of the door DR. However, at the position P3 on the back side of the door DR, an image is displayed in which only the frame of the door DR (the open door) stands on the glacier. In this way, the user can have a virtual reality experience of warping from the first virtual space to the second virtual space, which is a different dimension from the first virtual space, and a higher level of virtual reality can be realized.
- the images of FIG. 7 and FIG. 9 can be generated by, for example, the following method: for a pixel in the region other than the door DR (the region other than the region corresponding to the singular point), the image of the virtual space in which the user character is located is drawn, while for a pixel in the region of the door DR (the region corresponding to the singular point), the image of the virtual space on the other side of the singular point is drawn.
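- this per-pixel composition (the second-space image inside the door region, the first-space image elsewhere) can be sketched as follows. In a real renderer the door-region mask would typically be produced by rendering the door opening into a stencil buffer; this is a minimal illustrative sketch, and all names are assumptions:

```python
def compose_frame(first_space_img, second_space_img, door_mask):
    """Compose the final frame pixel by pixel.

    door_mask[y][x] is True for pixels inside the door opening (the region
    corresponding to the singular point). Images are 2D lists of pixel
    values of identical dimensions.
    """
    return [
        [second_space_img[y][x] if door_mask[y][x] else first_space_img[y][x]
         for x in range(len(first_space_img[0]))]
        for y in range(len(first_space_img))
    ]
```

Swapping the two image arguments gives the reverse view of FIG. 9 (room inside the door region, ice country elsewhere).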
- FIG. 13B shows an example in which the user character UC1 does not pass through the location of the door DR (singular point SG) and the switching condition is not satisfied.
- in FIG. 13B, since the switching condition is not satisfied, the virtual space is not switched, and the information of the position P2 is set as position information in the first virtual space VS1 (room). Accordingly, even if the line-of-sight direction SL of the virtual camera is directed toward the door DR at the position P2, a room image is displayed in the opening of the door DR. That is, an image in which only the frame of the door DR stands on the floor of the room is displayed.
- in FIG. 13B, even if the user character UC1 moves from the position P2 to the position P3 and the line-of-sight direction SL of the virtual camera is directed toward the door DR, an image in which only the frame of the door DR stands on the floor of the room is displayed.
- the second traveling direction D2 is different from the first traveling direction D1. For example, if the first traveling direction D1 is the forward (positive) direction, then passing through the door DR in the second traveling direction D2 is passage in the opposite, negative (reverse polarity) direction.
- information on the position P4 of the user character UC1 is set as position information on the first virtual space VS1 (room).
- as the first drawing process, a process of drawing at least the image of the first virtual space VS1 is performed. That is, a process of drawing the image of the room, which is the first virtual space VS1, is performed.
- in addition, a process of drawing the image of the second virtual space VS2 is performed, and an image in which the image of the second virtual space VS2 is displayed in the region of the door DR (singular point SG) is generated. For example, when the line-of-sight direction SL of the virtual camera of the user character UC1 at the position P4 faces the direction opposite to the second traveling direction D2 and faces the door DR, an image in which the image of the second virtual space VS2 (ice country) is displayed in the region of the door DR is generated.
- on the other hand, in FIG. 13B, the user character has moved from the position P2 to the position P4 without passing through the door DR. In this case, the position P4 remains a position in the second virtual space VS2. That is, in FIG. 13A the position P4 is switched to a position in the first virtual space VS1, but in FIG. 13B the same position P4 remains a position in the second virtual space VS2.
- the user character (virtual camera) position information is set as position information of the third virtual space VS3 (on the train), and at least an image of the third virtual space VS3 is drawn as the third drawing process.
- the user character UC1 passes through the door DR (singular point SG) in the first traveling direction D1 (positive direction) and moves from the position P1 of the first virtual space VS1 to the position P2. Since the switching condition by passage through the door DR is thereby established, the position P2 is set as a position in the second virtual space VS2 (ice country). Then, the user character UC1 passes through the door DR in the second traveling direction D2 (negative direction) from the position P2 of the second virtual space VS2 and returns to the position P1. Since the switching condition by passage through the door DR is again established, the position P1 is set as a position in the first virtual space VS1 (room).
- the door DR is opened and closed in FIG. 15B.
- the world beyond the door DR changes from the second virtual space VS2 (ice land) to the third virtual space VS3 (on the train).
- an image as shown in FIG. 11A is displayed.
- the user character UC1 has moved from the position P1 of the first virtual space VS1 to the position P2 through the door DR.
- the position P2 is set as the position of the third virtual space VS3 (on the train).
- the third drawing process at least an image of the third virtual space VS3 is drawn, thereby generating an image as shown in FIG. 11B, for example.
- in addition, an image as shown in FIG. 12 is generated. That is, as the third drawing process, a process of drawing the image of the first virtual space VS1 (room) in addition to the image of the third virtual space VS3 (on the train) is performed, and an image in which the image of the first virtual space VS1 is displayed in the region of the door DR is generated. In this way, not only the switching process between the first virtual space and the second virtual space but also, for example, a switching process between the first virtual space and the third virtual space can be realized. Alternatively, a switching process between the second virtual space and the third virtual space may be performed.
- in FIG. 17A, a plurality of users corresponding to the user characters UC1 and UC2 are playing. The user character UC1 passes through the door DR from the first virtual space VS1 and moves to the second virtual space VS2, and then passes through the door DR again and returns to the first virtual space VS1. On the other hand, after the user character UC2 passes through the door DR and moves to the second virtual space VS2, it remains in the second virtual space VS2. In this case, in the present embodiment, the third drawing process in the third virtual space is permitted on the condition that both of the user characters UC1 and UC2 have returned to the first virtual space VS1.
- FIG. 18A not only the user character UC1 but also the user character UC2 passes through the door DR from the position of the second virtual space VS2 and returns to the first virtual space VS1. Then, after the user characters UC1 and UC2 return to the first virtual space VS1, the door DR is opened and closed.
- as shown in FIG. 18B, when the user characters UC1 and UC2 pass through the door DR and the switching condition is satisfied, the positions of the user characters UC1 and UC2 are set as positions in the third virtual space VS3 (on the train). Then, the third drawing process in the third virtual space VS3 is performed, and images as shown in FIGS. 11B and 12 are generated. In this way, it is possible to prevent a situation in which the game progresses while some of the user characters corresponding to the plurality of users are left behind in the second virtual space, so that an appropriate and smooth game progress can be realized.
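- the condition that all user characters must have returned to the first virtual space before the switch to the third virtual space is permitted can be sketched as follows; the identifiers are illustrative assumptions:

```python
def can_switch_to_third_space(user_spaces):
    """Permit the switch to the third virtual space only when every user
    character has returned to the first virtual space VS1.

    user_spaces maps a user character id to that character's current space id.
    """
    return all(space == "VS1" for space in user_spaces.values())
```

For example, if UC2 is still in VS2 the switch is withheld; once both characters are back in VS1 it is permitted.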
- for example, voice information such as "I want to go to the ice country" is input as input information of the user US1 using a microphone or the like. Based on such input information of the user US1, it may be determined whether or not the switching condition is satisfied and, for example, the virtual space switching process may be performed. In this way, the virtual space switching process can be realized with the simple effort of only inputting input information.
- The user input information used for determining the switching condition may be, for example, operation information input via a game controller (operation unit in a broad sense).
- Alternatively, the switching condition may be determined using, as the user's input information, input information based on the movement of the user's hand or fingers.
- The movement of the user's head may also be detected based on a sensor unit of the HMD or the like and used as user input information.
- Whether or not the switching condition is satisfied may also be determined based on the detection information of a sensor.
- For example, a sensor SE is provided on the door DR.
- The open/closed state of the door DR may then be determined based on the detection information of the sensor SE to determine whether the switching condition is satisfied.
- As the sensor SE, for example, the light receiving element described earlier can be used. In this way, the open/closed state of the door DR can be determined by using the light receiving element serving as the sensor SE to detect light from the base stations 280 and 284 provided in the room as shown in FIG. 4. The switching condition can therefore be determined with simple processing.
- Alternatively, a motion sensor composed of an acceleration sensor, a gyro sensor, or the like may be used as the sensor SE.
- The open/closed state of the door DR may then be determined to decide whether or not the virtual space switching condition is satisfied.
- Various other types of sensors can also be employed as the sensor SE.
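A door-sensor-based switching check might look like the following. This is a sketch under assumed values: the hinge-angle threshold and function names are invented for illustration and are not specified by the patent.

```python
# Illustrative only: decide the switching condition from a door-mounted
# motion sensor reading plus the user's passage through the door.

def door_is_open(hinge_angle_deg, open_threshold_deg=20.0):
    """Treat the door as open once the measured hinge angle exceeds a threshold."""
    return hinge_angle_deg >= open_threshold_deg

def switching_condition(passed_door, hinge_angle_deg):
    """Satisfied only when the user passed the door while it was open."""
    return passed_door and door_is_open(hinge_angle_deg)

print(switching_condition(True, 45.0))   # passed through an open door
print(switching_condition(True, 5.0))    # door effectively closed
print(switching_condition(False, 45.0))  # door open but not passed
```

The same structure applies whether the sensor is a light receiving element, an accelerometer, or a gyro; only `door_is_open` changes.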
- In the present embodiment, an object corresponding to the singular point is arranged in the play field FL in the real space in which the users (US1, US2) move.
- Specifically, a door DR is arranged as the object corresponding to the singular point.
- When the user passes the location of this object in the real space, it is determined that the user character (virtual camera) has passed the place corresponding to the singular point.
- By arranging in the virtual space a door object simulating the door in the real space, the real-space door and the virtual-space door object can be properly associated with each other, and the virtual reality experienced by the user can be further improved.
- The door in the real space can also be opened and closed.
- In that case, the door object in the virtual space is opened and closed as well, and the virtual reality experienced by the user can be greatly improved.
- In FIG. 20A, as indicated by B1, only the head and upper body of the user character UC1 pass through the door DR and move to the second virtual space VS2 (ice country).
- On the other hand, the lower body of the user character UC1 remains in the first virtual space VS1 (room).
- In this case, the head and upper body of the user character UC1 appear to disappear, which gives the mysterious impression that a different world lies on the other side of the door DR.
- In this case, the appearance differs even further.
- For this reason, it is desirable to also detect the position information of lower body parts such as the feet. Also, as shown in FIGS. 20A and 20B, it suffices that at least the virtual camera corresponding to the user's viewpoint passes through the region corresponding to the singular point; the entire user character need not pass through.
- Here, the user character UC1 is located in the second virtual space VS2 (ice country), and unlike FIG. 8, the door DR corresponding to the singular point SG is not displayed.
- In this case, the user US1 corresponding to the user character UC1 inputs, for example, the voice "Appear, door". As a result, the process of setting the singular point SG is performed, and the door DR corresponding to the singular point SG appears in the second virtual space VS2 and is displayed.
- Then, in the state of FIG. 21(B), the user US1 may input, for example, the voice "Disappear, door" to perform the process of unsetting the singular point SG, so that the door DR is hidden again as shown in FIG. 21(A).
- In this way, the setting and unsetting of the singular point can be switched arbitrarily based on user input information or the like; for example, the object corresponding to the singular point can be displayed or hidden. As a result, it becomes possible, for example, to conceal singular points, and more varied game processing and game effects can be realized.
- The setting and unsetting of the singular point may also be switched according to, for example, the result of the game processing or the game level of the user, in addition to the user input information.
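The set/unset toggle described above can be sketched as a small state holder. This is a minimal sketch, assuming the voice commands arrive as already-recognized text strings; the class name and command phrases are stand-ins for the "Appear, door"/"Disappear, door" utterances.

```python
# Hypothetical sketch: toggling the singular point (and hence the visibility
# of the door object DR) from recognized voice commands.

class SingularPoint:
    def __init__(self):
        self.active = False   # unset: door object is hidden

    def handle_voice(self, phrase):
        if phrase == "appear door":
            self.active = True    # set: door DR is displayed
        elif phrase == "disappear door":
            self.active = False   # unset: door DR is hidden again
        return self.active

sg = SingularPoint()
print(sg.handle_voice("appear door"))
print(sg.handle_voice("disappear door"))
```

The same `active` flag could equally be driven by game results or the user's game level, as the description notes.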
- In the present embodiment, processing for controlling a sensation device that allows the user to experience virtual reality is performed, and the control of the sensation device when the switching condition is not satisfied may differ from the control of the sensation device when the switching condition is satisfied.
- For example, suppose the user character UC1 moves past the door DR without passing through it, so that the virtual space switching condition is not satisfied. In this case, sensation devices such as the blower BL and the vibration device VB are not operated.
- On the other hand, suppose the user character UC1 passes through the door DR and moves to the other side of it, so that the virtual space switching condition is satisfied. In this case, sensation devices such as the blower BL and the vibration device VB are operated. For example, control is performed such that the blower BL blows cool air or the vibration device VB vibrates the floor. In this way, as described with reference to FIGS. 10A and 10B, the user can be made to experience, through sensation devices such as the blower BL and the vibration device VB, the switching from the first virtual space (room) to the second virtual space (ice country).
- The control mode of the sensation device and the type of sensation device to be controlled may also differ between the case where the switching condition is not satisfied and the case where it is satisfied.
- For example, the power of the blower BL may be varied, or the vibration level of the vibration device VB may be varied.
- Alternatively, a first sensation device for the first virtual space and a second sensation device for the second virtual space may be prepared; the first sensation device may be operated when the switching condition is not satisfied, and the second sensation device may be operated when the switching condition is satisfied.
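The differing device control can be sketched as below. This is illustrative only; the `Device` class, the power levels, and the `set_level` interface are assumptions made for the example, not the patent's actual hardware API.

```python
# Hypothetical sketch: operate the blower BL and vibration device VB only
# once the switching condition (moving into the ice country) is satisfied.

class Device:
    def __init__(self, name):
        self.name, self.level = name, 0.0

    def set_level(self, level):
        self.level = level   # 0.0 = off, 1.0 = full power

def control_sensation(switch_satisfied, blower, vibrator):
    if switch_satisfied:
        blower.set_level(0.8)    # blow cool air for the ice country
        vibrator.set_level(0.5)  # vibrate the floor
    else:
        blower.set_level(0.0)    # devices stay idle in the room
        vibrator.set_level(0.0)

bl, vb = Device("BL"), Device("VB")
control_sensation(True, bl, vb)
print(bl.level, vb.level)
```

The per-space variant from the text maps onto the same shape: select which `Device` instances to drive based on the current virtual space.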
- In the present embodiment, output processing of notification information for collisions between users in the real space is performed.
- In FIG. 23A, a plurality of users US1 and US2 corresponding to the user characters UC1 and UC2 are playing the game.
- The user character UC1 moves through the door DR to the second virtual space VS2, while the user character UC2 does not pass through the door DR and remains in the first virtual space VS1.
- In FIG. 23B, the user character UC1 moves within the second virtual space VS2 and approaches the user character UC2.
- FIG. 24 shows an example of a display image of the HMD1 of the user US1. In this display image, the hand, fingers, and the like of the user US1 are displayed.
- Normally, the user character UC2 is not displayed on the HMD1 of the user US1.
- In the present embodiment, however, the user character UC2 is displayed faintly, like a ghost character, for example.
- That is, the user character UC2 is displayed semi-transparently composited with the background or the like at a given translucency.
- In this way, the user US1 can visually recognize that the user US2 corresponding to the user character UC2 is nearby, and a collision between the users in the real world can be avoided.
- Similarly, the HMD2 of the user US2 also displays the user character UC1 corresponding to the user US1 faintly, like a ghost character, by, for example, semi-transparent composition with the background or the like. In this way, the user US2 can visually recognize that the user US1 corresponding to the user character UC1 is nearby, and a collision between the users in the real world can be avoided.
- The display mode of the user character UC2 in FIG. 24 may also be changed according to the risk of collision. For example, as the distance between the user characters UC1 and UC2 decreases, or as the approach speed of the user characters UC1 and UC2 increases, the display of the user character UC2 may be gradually darkened to increase its visibility.
- Alternatively, information notifying the user of a collision warning may be superimposed on the display image of the HMD, or the warning may be given in some other manner.
- Various other types of output processing can also be assumed as the notification information output processing.
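One way to realize the risk-dependent display mode above is to compute the ghost character's opacity from distance and approach speed. This is a sketch under assumed ranges: the maximum-distance and maximum-speed constants and the equal weighting are invented for the example.

```python
# Illustrative only: scale the ghost character's opacity with collision risk.
# Closer distance or faster approach -> more opaque (more visible) warning.

def ghost_alpha(distance_m, approach_speed_mps,
                max_distance_m=5.0, max_speed_mps=3.0):
    """Return an alpha in [0.0, 1.0]; 0.0 = invisible, 1.0 = fully opaque."""
    proximity = max(0.0, 1.0 - distance_m / max_distance_m)
    urgency = min(1.0, max(0.0, approach_speed_mps) / max_speed_mps)
    return min(1.0, 0.5 * proximity + 0.5 * urgency)

print(ghost_alpha(5.0, 0.0))   # far apart, not approaching -> invisible
print(ghost_alpha(0.0, 3.0))   # touching and closing fast -> fully opaque
print(ghost_alpha(2.5, 1.5))   # intermediate risk
```

The resulting alpha would then drive the semi-transparent composition with the background described for FIG. 24.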
- In FIG. 25A, after the user characters UC1 and UC2 have moved to the second virtual space VS2, the user character UC2 moves to a position behind the door DR. As shown in FIG. 25B, the user character UC2 then moves from the position behind the door DR toward the user character UC1 and passes through the door DR.
- Here, the line-of-sight direction SL of the virtual camera of the user character UC1 faces the direction of the door DR.
- Therefore, an image in which the region of the door DR becomes an image of the room is displayed.
- Then, as shown in FIG. 25B, when the user character UC2 in the second virtual space VS2 passes through the door DR, an image is displayed in which the user character UC2 appears to pop out suddenly from the image of the room.
- In FIG. 26A, information on the position PR of the user US1 in the real space is acquired.
- The information on the position PR in the real space can be acquired by, for example, the tracking processing of the HMD1 described with reference to FIGS. 2(A) to 3(B).
- Then, a process of moving the user character UC1 in the virtual space is performed. That is, the position PR of the user US1 in the real space shown in FIG. 26A and the position PV of the user character UC1 in the virtual space shown in FIG. 26B are associated with each other, and the position PV is changed so as to be linked to the movement of the position PR, thereby moving the user character UC1.
- The display image of the HMD1 worn by the user US1 is then generated by the method of the present embodiment.
- That is, before the switching condition is satisfied, the information on the position PV of the user character UC1 (virtual camera), which is specified by the information on the position PR of the user US1 in the real space, is set as position information in the first virtual space.
- When the switching condition is satisfied, the information on the position PV of the user character UC1 (virtual camera), which is specified by the information on the position PR of the user US1 in the real space, is set as position information in the second virtual space.
- Consequently, the image displayed on the HMD1 differs between the case of passing through the door DR and moving to the positions P2 and P3 as shown in FIG. 13(A), and the case of moving to the positions P2 and P3 without passing through the door DR as shown in FIG. 13(B). That is, even at the same positions P2 and P3 in the real space, an image of the second virtual space VS2 (ice country) is displayed on the HMD1 in FIG. 13A, whereas an image of the first virtual space VS1 (room) is displayed on the HMD1 in FIG. 13B. It therefore becomes possible to provide the user with a mysterious VR experience that could not be realized with conventional simulation systems.
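The mapping described above can be sketched as follows. This is a hedged, simplified sketch: the 2D coordinates and per-space origins are assumptions introduced to show how the same real-space position PR can resolve to positions in different virtual spaces.

```python
# Illustrative only: map the tracked real-space position PR to a
# virtual-space position PV, where the target space depends on whether
# the switching condition (passing through the door DR) has been satisfied.

ORIGINS = {"VS1": (0.0, 0.0), "VS2": (100.0, 0.0)}  # assumed space origins

def virtual_position(pr_xy, switching_satisfied):
    space = "VS2" if switching_satisfied else "VS1"
    ox, oy = ORIGINS[space]
    return space, (ox + pr_xy[0], oy + pr_xy[1])

# The same real position yields different virtual positions and hence
# different HMD images, as in FIGS. 13(A) and 13(B).
print(virtual_position((2.0, 3.0), False))  # position set in the room (VS1)
print(virtual_position((2.0, 3.0), True))   # position set in the ice country (VS2)
```

The key point is that the switch changes only which space's coordinate frame PR is interpreted in; the tracking data itself is unchanged.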
- In step S1, it is determined whether or not the switching condition is satisfied. For example, when the user character has not passed through the place corresponding to the singular point and the switching condition is not satisfied, the position information of the user character or the virtual camera is set as position information in the first virtual space (step S2).
- Then, in step S3, a process of drawing the image of the second virtual space is performed to generate an image in which the image of the second virtual space is displayed in the region corresponding to the singular point.
- For example, an image is generated in which an image of the ice country, which is the second virtual space VS2, is displayed in the region of the door DR, which is the region corresponding to the singular point.
- On the other hand, when the switching condition is satisfied, the position information of the user character or the virtual camera is set as position information in the second virtual space (step S4).
- Then, a process of drawing the image of the first virtual space is performed to generate an image in which the image of the first virtual space is displayed in the region corresponding to the singular point (step S5).
- For example, an image is generated in which an image of the room, which is the first virtual space VS1, is displayed in the region of the door DR, which is the region corresponding to the singular point.
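The S1–S5 flow above can be condensed into a per-frame sketch. This is illustrative only; the space labels are stand-ins, and the returned tuple abstracts the actual drawing processes.

```python
# Hypothetical sketch of the flowchart: S1 branches on the switching
# condition; S2/S3 handle the pre-switch case, S4/S5 the post-switch case.

def frame_update(switching_satisfied):
    if not switching_satisfied:              # S1: condition not yet satisfied
        position_space = "VS1"               # S2: position set in first space
        portal_shows = "VS2"                 # S3: door region shows ice country
    else:
        position_space = "VS2"               # S4: position set in second space
        portal_shows = "VS1"                 # S5: door region shows the room
    return position_space, portal_shows

print(frame_update(False))
print(frame_update(True))
```

Note the symmetry: whichever space the user occupies, the singular-point region always shows the other space.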
- The present invention is not limited to the embodiment described above; techniques, processes, and configurations equivalent to these are also included in the scope of the present invention.
- The present invention can be applied to various games. The present invention can also be applied to various simulation systems such as arcade game devices, home game devices, and large attraction systems in which a large number of users participate.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Hardware Design (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Computer Graphics (AREA)
- Software Systems (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
- Controls And Circuits For Display Device (AREA)
Abstract
This simulation system includes a virtual space setting unit, a moving body processing unit, and a display processing unit. The virtual space setting unit sets a first virtual space and a second virtual space as virtual spaces. Until a switching condition is established, the display processing unit sets positional information about a user's moving body or a virtual camera as positional information in the first virtual space, performs a process for rendering an image of the second virtual space such that the image of the second virtual space is added to an image of the first virtual space, and generates an image in which the image of the second virtual space is displayed in an area corresponding to a singular point. When the switching condition has been established, the display processing unit sets the positional information about the user's moving body or the virtual camera as positional information in the second virtual space, performs a process for rendering an image of the first virtual space such that the image of the first virtual space is added to an image of the second virtual space, and generates an image in which the image of the first virtual space is displayed in an area corresponding to a singular point.
Description
The present invention relates to a simulation system, an image processing method, an information storage medium, and the like.
Conventionally, simulation systems that generate an image viewed from a virtual camera in a virtual space have been known. For example, as a conventional technique of a simulation system that realizes virtual reality (VR) by displaying an image viewed from a virtual camera on an HMD (head-mounted display device), there is the technique disclosed in Patent Document 1 and the like.
In such a simulation system, only one virtual space is set as the virtual space in which a plurality of objects are arranged. The user then enjoys the virtual reality world by having the user character corresponding to the user move within that single virtual space. However, a simulation system that realizes a virtual reality in which, for example, a plurality of virtual spaces can be traversed has not previously been proposed.
According to some aspects of the present invention, it is possible to provide a simulation system, an image processing method, an information storage medium, and the like that can realize a virtual reality in which a plurality of virtual spaces can be traversed.
One aspect of the present invention relates to a simulation system including: a virtual space setting unit that performs a process of setting a virtual space in which objects are arranged; a moving body processing unit that performs a process of moving a user moving body corresponding to a user in the virtual space; and a display processing unit that performs a process of drawing an image viewed from a virtual camera in the virtual space. The virtual space setting unit sets, as the virtual space, a first virtual space and a second virtual space that is connected to the first virtual space via a singular point. Before a given switching condition is satisfied, the display processing unit sets the position information of the user moving body or the virtual camera as position information in the first virtual space and, as a first drawing process, performs a process of drawing the image of the second virtual space in addition to the image of the first virtual space to generate an image in which the image of the second virtual space is displayed in a region corresponding to the singular point. When the switching condition is satisfied by the user moving body or the virtual camera passing through the place corresponding to the singular point in a first traveling direction, the display processing unit sets the position information of the user moving body or the virtual camera as position information in the second virtual space and, as a second drawing process, performs a process of drawing the image of the first virtual space in addition to the image of the second virtual space to generate an image in which the image of the first virtual space is displayed in the region corresponding to the singular point. The display processing unit also generates an image in which the image of the first virtual space is displayed in the region corresponding to the singular point when the line-of-sight direction of the virtual camera faces the direction opposite to the first traveling direction.
The present invention also relates to a program that causes a computer to function as each of the above units, or a computer-readable information storage medium storing the program.
According to this aspect of the present invention, an image viewed from a virtual camera is generated in a virtual space in which a user moving body moves. A first virtual space and a second virtual space are set as the virtual space. Before the switching condition is satisfied, the position information of the user moving body or the virtual camera is set as position information in the first virtual space, the image of the second virtual space is drawn in addition to the image of the first virtual space, and an image is generated in which the image of the second virtual space is displayed in the region corresponding to the singular point. When the switching condition is satisfied, the position information of the user moving body or the virtual camera is set as position information in the second virtual space, the image of the first virtual space is drawn in addition to the image of the second virtual space, and an image is generated in which the image of the first virtual space is displayed in the region corresponding to the singular point. Further, in this aspect, when the line-of-sight direction of the virtual camera faces the direction opposite to the first traveling direction, an image is generated in which the image of the first virtual space is displayed in the region corresponding to the singular point. In this way, when the user moving body or the virtual camera passes through the region corresponding to the singular point, the position of the user moving body or the virtual camera switches from a position in the first virtual space to a position in the second virtual space. When the user moving body or the virtual camera is located in the first virtual space, the image of the second virtual space is drawn in addition to the image of the first virtual space, and the image of the second virtual space is displayed in the region corresponding to the singular point. Conversely, when the user moving body or the virtual camera is located in the second virtual space, the image of the first virtual space is drawn in addition to the image of the second virtual space, and the image of the first virtual space is displayed in the region corresponding to the singular point. This makes it possible to provide a simulation system or the like that realizes a virtual reality in which a plurality of virtual spaces can be traversed.
In this aspect of the invention, when the user moving body or the virtual camera passes through the singular point in a second traveling direction different from the first traveling direction, the display processing unit may set the position information of the user moving body or the virtual camera as position information in the first virtual space and, as the first drawing process, perform a process of drawing at least the image of the first virtual space.
In this way, when the user moving body or the virtual camera passes through the region corresponding to the singular point in a second traveling direction different from the first traveling direction, the position of the user moving body or the virtual camera switches from a position in the second virtual space to a position in the first virtual space.
In this aspect of the invention, when the switching condition is satisfied by the user moving body or the virtual camera passing through the place corresponding to the singular point, the display processing unit may set the position information of the user moving body or the virtual camera as position information in a third virtual space and, as a third drawing process, perform a process of drawing at least the image of the third virtual space.
This makes it possible to provide a simulation system or the like that realizes a virtual reality of moving through a third virtual space in addition to the first and second virtual spaces.
In this aspect of the invention, when a plurality of users play, the display processing unit may permit the third drawing process on condition that the plurality of user moving bodies or the plurality of virtual cameras corresponding to the plurality of users have passed through the place corresponding to the singular point and returned to the first virtual space.
In this way, when a plurality of users play, it is possible to prevent a situation in which some of the users are left behind in the second virtual space.
In this aspect of the invention, the display processing unit may determine whether or not the switching condition is satisfied based on the input information of the user or the detection information of a sensor.
In this way, processing such as switching the virtual space can be realized based on user input information or sensor detection information.
In this aspect of the invention, an object corresponding to the singular point may be arranged in a play field in the real space in which the user moves, and the display processing unit may determine that the user moving body or the virtual camera has passed through the place corresponding to the singular point when the user passes the location of the object in the real space.
In this way, by actually passing the location of the object corresponding to the singular point in the real space, the corresponding user moving body or virtual camera is determined to have passed the location of the singular point, so that the virtual reality experienced by the user can be improved.
In this aspect of the invention, the display processing unit may perform a process of setting or unsetting the singular point.
This makes it possible to switch between setting and unsetting of the singular point, so that more varied game processing, game effects, and the like can be realized.
This aspect of the invention may also include a sensation device control unit that controls a sensation device for allowing the user to experience virtual reality (a computer is caused to function as the sensation device control unit), and the sensation device control unit may make the control of the sensation device when the switching condition is not satisfied differ from the control of the sensation device when the switching condition is satisfied.
This makes it possible to let the user effectively experience, through the sensation device, the fact that the virtual space has been switched.
This aspect of the invention may also include a notification processing unit that performs output processing of notification information for collisions between users in the real space when a plurality of users play (a computer may be caused to function as the notification processing unit).
In this way, for example, in a situation where a collision between users may occur in the real space even though no collision occurs in the virtual space, the possibility of a collision between the users can be effectively notified to the users.
This aspect of the invention may also include an information acquisition unit that acquires the position information of the user in the real space (a computer is caused to function as the information acquisition unit). The moving body processing unit may perform a process of moving the user moving body based on the acquired position information, and the display processing unit may generate a display image of a head-mounted display device worn by the user. Before the switching condition is satisfied, the display processing unit may set the position information of the user moving body or the virtual camera, which is specified by the position information of the user in the real space, as position information in the first virtual space, and when the switching condition is satisfied, it may set the position information of the user moving body or the virtual camera, which is specified by the position information of the user in the real space, as position information in the second virtual space.
In this way, the position information of the user in the real space is acquired, and the user moving body or the like is moved in the virtual space based on the acquired position information. When the switching condition is satisfied, the position of the user moving body or the virtual camera switches from a position in the first virtual space to a position in the second virtual space, so that the virtual reality of traversing a plurality of virtual spaces can be further improved.
Another aspect of the present invention relates to a simulation system including: an information acquisition unit that acquires position information of a user in a real space; a virtual space setting unit that performs a process of setting a virtual space in which objects are arranged; a moving body processing unit that performs, based on the acquired position information, a process of moving a user moving body corresponding to the user in the virtual space; and a display processing unit that performs a process of drawing an image viewed from a virtual camera in the virtual space and generates an image to be displayed on a head-mounted display device worn by the user. The virtual space setting unit sets, as the virtual space, a first virtual space and a second virtual space that is connected to the first virtual space via a singular point. Before a given switching condition is satisfied, the display processing unit sets the position information of the user moving body or the virtual camera, which is specified by the position information of the user in the real space, as position information in the first virtual space and, as a first drawing process, performs a process of drawing at least the image of the first virtual space. When the switching condition is satisfied by the user moving body or the virtual camera passing through the place corresponding to the singular point in a first traveling direction, the display processing unit sets the position information of the user moving body or the virtual camera, which is specified by the position information of the user in the real space, as position information in the second virtual space and, as a second drawing process, performs a process of drawing at least the image of the second virtual space. The present invention also relates to a program that causes a computer to function as each of the above units, or a computer-readable information storage medium storing the program.
According to this aspect of the present invention, the position information of the user in real space is acquired, and the user moving body or the like is moved in the virtual space based on the acquired position information. When the switching condition is satisfied, the position of the user moving body or the virtual camera is switched from a position in the first virtual space to a position in the second virtual space. This makes it possible to provide a simulation system or the like that realizes a virtual reality in which the user can move between a plurality of virtual spaces.
Another aspect of the present invention relates to an image processing method comprising: virtual space setting processing for setting a virtual space in which objects are arranged; moving body processing for moving a user moving body corresponding to a user in the virtual space; and display processing for drawing an image viewed from a virtual camera in the virtual space. In the virtual space setting processing, a first virtual space and a second virtual space linked to the first virtual space via a singular point are set as the virtual space. In the display processing, before a given switching condition is satisfied, the position information of the user moving body or the virtual camera is set as position information of the first virtual space, and, as a first drawing process, an image of the second virtual space is drawn in addition to the image of the first virtual space, thereby generating an image in which the image of the second virtual space is displayed in a region corresponding to the singular point. When the user moving body or the virtual camera passes through a place corresponding to the singular point in a first traveling direction and the switching condition is thereby satisfied, the position information of the user moving body or the virtual camera is set as position information of the second virtual space, and, as a second drawing process, an image of the first virtual space is drawn in addition to the image of the second virtual space, thereby generating an image in which the image of the first virtual space is displayed in the region corresponding to the singular point. Further, in the display processing, when the line-of-sight direction of the virtual camera faces the direction opposite to the first traveling direction, an image is generated in which the image of the first virtual space is displayed in the region corresponding to the singular point.
Another aspect of the present invention relates to an image processing method comprising: information acquisition processing for acquiring position information of a user in real space; virtual space setting processing for setting a virtual space in which objects are arranged; moving body processing for moving a user moving body corresponding to the user in the virtual space, based on the acquired position information; and display processing for drawing an image viewed from a virtual camera in the virtual space, thereby generating an image to be displayed on a head-mounted display device worn by the user. In the virtual space setting processing, a first virtual space and a second virtual space linked to the first virtual space via a singular point are set as the virtual space. In the display processing, before a given switching condition is satisfied, the position information of the user moving body or the virtual camera, specified by the position information of the user in real space, is set as position information of the first virtual space, and, as a first drawing process, at least an image of the first virtual space is drawn. When the user moving body or the virtual camera passes through a place corresponding to the singular point in a first traveling direction and the switching condition is thereby satisfied, the position information of the user moving body or the virtual camera, specified by the position information of the user in real space, is set as position information of the second virtual space, and, as a second drawing process, at least an image of the second virtual space is drawn.
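To make the switching behavior in the aspects above concrete, the following is a minimal, hypothetical sketch (not the disclosed implementation; the coordinate convention, class, and parameter names are all illustrative assumptions). It maps a tracked real-space position into position information of either the first or the second virtual space, and switches spaces when the position passes the place corresponding to the singular point in the first traveling direction (here, increasing z).

```python
# Hypothetical sketch of the virtual-space switching condition described above.
# All names and the "increasing z" traveling direction are illustrative.

class SpaceSwitcher:
    def __init__(self, singular_z, space1_origin, space2_origin):
        self.singular_z = singular_z        # z of the place corresponding to the singular point
        self.space1_origin = space1_origin  # world offset of the first virtual space
        self.space2_origin = space2_origin  # world offset of the second virtual space
        self.current_space = 1
        self._prev_z = None

    def update(self, real_pos):
        """Map a real-space position (x, y, z) to (space_id, virtual position)."""
        x, y, z = real_pos
        # Switching condition: crossing the singular point in the first
        # traveling direction while still in the first virtual space.
        if (self.current_space == 1 and self._prev_z is not None
                and self._prev_z < self.singular_z <= z):
            self.current_space = 2
        self._prev_z = z
        origin = self.space1_origin if self.current_space == 1 else self.space2_origin
        # The same tracked position is interpreted in whichever space is active.
        return self.current_space, (x + origin[0], y + origin[1], z + origin[2])
```

Note that walking backward before the crossing never triggers the switch, matching the "first traveling direction" requirement of the claims.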
Exemplary embodiments are described below. The embodiments described below do not unduly limit the scope of the present invention as set forth in the claims. Moreover, not all of the configurations described in connection with the embodiments are necessarily essential elements of the present invention.
1. Simulation System
FIG. 1 is a block diagram illustrating a configuration example of a simulation system (simulator, game system, image generation system) according to the present embodiment. The simulation system of the present embodiment is, for example, a system that simulates virtual reality (VR), and is applicable to various systems such as a game system that provides game content, a real-time simulation system such as a sports competition simulator or a driving simulator, a system that provides SNS services, a content providing system that provides content such as video, or an operating system that implements remote work. The simulation system of the present embodiment is not limited to the configuration shown in FIG. 1, and various modifications are possible, such as omitting some of its components (units) or adding other components.
The operation unit 160 allows the user (player) to input various types of operation information (input information). The operation unit 160 can be implemented by various operation devices such as operation buttons, direction keys, a joystick, a steering wheel, a pedal, a lever, or a voice input device.
The storage unit 170 stores various types of information. The storage unit 170 serves as a work area for the processing unit 100, the communication unit 196, and the like. A game program, and the game data necessary for executing the game program, are held in the storage unit 170. The function of the storage unit 170 can be implemented by semiconductor memory (DRAM, VRAM), an HDD (hard disk drive), an SSD, an optical disk device, or the like. The storage unit 170 includes an object information storage unit 172 and a drawing buffer 178.
The information storage medium 180 (computer-readable medium) stores programs, data, and the like, and its function can be implemented by an optical disk (DVD, BD, CD), an HDD, semiconductor memory (ROM), or the like. The processing unit 100 performs the various processes of the present embodiment based on a program (data) stored in the information storage medium 180. That is, the information storage medium 180 stores a program for causing a computer (a device including an input device, a processing unit, a storage unit, and an output unit) to function as each unit of the present embodiment (a program for causing the computer to execute the processing of each unit).
The HMD 200 (head-mounted display device) is a device that is worn on the user's head and displays an image in front of the user's eyes. The HMD 200 is preferably of a non-transmissive type, but may be of a transmissive type. The HMD 200 may also be a so-called eyeglass-type HMD.
The HMD 200 includes a sensor unit 210, a display unit 220, and a processing unit 240. A modification in which light emitting elements are provided in the HMD 200 is also possible. The sensor unit 210 implements tracking processing such as head tracking. For example, the position and direction of the HMD 200 are specified by tracking processing using the sensor unit 210. By specifying the position and direction of the HMD 200, the viewpoint position and line-of-sight direction of the user can be specified.
Various tracking methods can be employed. In a first tracking method, which is one example, a plurality of light receiving elements (photodiodes or the like) are provided as the sensor unit 210, as described in detail later with reference to FIGS. 2(A) and 2(B). By receiving light (a laser or the like) emitted from externally provided light emitting elements (LEDs or the like) with these light receiving elements, the position and direction of the HMD 200 (the user's head) in the three-dimensional space of the real world are specified. In a second tracking method, a plurality of light emitting elements (LEDs) are provided on the HMD 200, as described in detail later with reference to FIGS. 3(A) and 3(B). The position and direction of the HMD 200 are specified by capturing the light from these light emitting elements with an externally provided imaging unit. In a third tracking method, a motion sensor is provided as the sensor unit 210, and the position and direction of the HMD 200 are specified using this motion sensor. The motion sensor can be implemented by, for example, an acceleration sensor or a gyro sensor. For example, by using a six-axis motion sensor that combines a three-axis acceleration sensor and a three-axis gyro sensor, the position and direction of the HMD 200 in the three-dimensional space of the real world can be specified. The position and direction of the HMD 200 may also be specified by a combination of the first and second tracking methods, or of the first and third tracking methods. Furthermore, instead of specifying the user's viewpoint position and line-of-sight direction by specifying the position and direction of the HMD 200, tracking processing that directly specifies the user's viewpoint position and line-of-sight direction may be employed.
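As a rough illustration of the third tracking method, the sketch below estimates HMD orientation from a six-axis motion sensor: yaw by integrating the gyro's rate, and pitch/roll from the accelerometer's gravity reading. This is a simplified assumption for illustration only (real sensor fusion would combine both signals with filtering), and the function names are not from the disclosure.

```python
import math

# Illustrative sketch of orientation estimation with a 6-axis motion sensor.
# Yaw is dead-reckoned from the 3-axis gyro; pitch/roll come from the
# 3-axis accelerometer's measurement of gravity. Simplified assumption only.

def update_yaw(yaw_deg, gyro_yaw_dps, dt):
    """Integrate the gyro's yaw rate (deg/s) over dt seconds, wrapping to [0, 360)."""
    return (yaw_deg + gyro_yaw_dps * dt) % 360.0

def tilt_from_accel(ax, ay, az):
    """Estimate pitch and roll (deg) from the accelerometer's gravity vector."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll
```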
The display unit 220 of the HMD 200 can be implemented by, for example, an organic EL display (OEL) or a liquid crystal display (LCD). For example, the display unit 220 of the HMD 200 is provided with a first display or first display area set in front of the user's left eye and a second display or second display area set in front of the user's right eye, enabling stereoscopic display. When performing stereoscopic display, for example, a left-eye image and a right-eye image with differing parallax are generated; the left-eye image is displayed on the first display and the right-eye image on the second display. Alternatively, the left-eye image is displayed in the first display area of a single display and the right-eye image in its second display area. The HMD 200 is also provided with two eyepiece lenses (fisheye lenses), one for the left eye and one for the right eye, thereby presenting a VR space that extends across the user's entire field of view. Correction processing that compensates for the distortion produced by the optical system, such as the eyepiece lenses, is performed on the left-eye image and the right-eye image. This correction processing is performed by the display processing unit 120.
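A common way to obtain the two parallax viewpoints described above is to offset the camera position by half the interpupillary distance (IPD) along the camera's right vector. The sketch below is an illustrative assumption (the 64 mm default IPD and the names are not from the disclosure), not the patented correction or rendering pipeline.

```python
# Illustrative sketch of generating left-eye and right-eye viewpoints with
# parallax for stereoscopic display. The default IPD of 0.064 m is an
# assumption for the example.

def eye_positions(cam_pos, right_vec, ipd=0.064):
    """Return (left_eye, right_eye) positions offset along the camera's right vector."""
    half = ipd / 2.0
    left = tuple(p - half * r for p, r in zip(cam_pos, right_vec))
    right = tuple(p + half * r for p, r in zip(cam_pos, right_vec))
    return left, right
```

The scene is then drawn once from each eye position, and the two images are presented to the first and second displays (or display areas).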
The processing unit 240 of the HMD 200 performs the various processes required by the HMD 200. For example, the processing unit 240 performs control processing of the sensor unit 210, display control processing of the display unit 220, and the like. The processing unit 240 may also perform three-dimensional sound (stereophonic sound) processing to reproduce the direction, distance, and spread of sound in three dimensions.
Although FIG. 1 shows an example in which the display unit of the simulation system is the HMD 200, the display unit of the simulation system may be of a type other than an HMD. For example, the display unit of the simulation system may be a display of an arcade game device (an ordinary 2D monitor or a dome screen), a television of a home game device, a display of a personal computer (PC), or the like.
The sound output unit 192 outputs the sound generated by the present embodiment, and can be implemented by, for example, a speaker or headphones.
The I/F (interface) unit 194 performs interface processing with a portable information storage medium 195, and its function can be implemented by an ASIC for I/F processing or the like. The portable information storage medium 195 is used by the user to store various types of information, and is a storage device that retains the stored information even when power is not supplied. The portable information storage medium 195 can be implemented by an IC card (memory card), a USB memory, a magnetic card, or the like.
The communication unit 196 communicates with external devices (other devices) via a wired or wireless network, and its function can be implemented by communication hardware such as a communication ASIC or a communication processor, or by communication firmware.
The program (data) for causing a computer to function as each unit of the present embodiment may be distributed from an information storage medium of a server (host device) to the information storage medium 180 (or the storage unit 170) via a network and the communication unit 196. Use of an information storage medium by such a server (host device) is also included within the scope of the present invention.
The processing unit 100 (processor) performs game processing (simulation processing), virtual space setting processing, moving body processing, virtual camera control processing, display processing, sound processing, and the like, based on operation information from the operation unit 160, tracking information from the HMD 200 (information on at least one of the position and direction of the HMD; information on at least one of the viewpoint position and line-of-sight direction), a program, and the like.
Each process (each function) of the present embodiment performed by each unit of the processing unit 100 can be implemented by a processor (a processor including hardware). For example, each process of the present embodiment can be implemented by a processor that operates based on information such as a program, and a memory that stores information such as the program. In the processor, for example, the function of each unit may be implemented by individual hardware, or the functions of the units may be implemented by integrated hardware. For example, the processor includes hardware, and that hardware can include at least one of a circuit that processes digital signals and a circuit that processes analog signals. For example, the processor can be configured by one or more circuit devices (for example, ICs) mounted on a circuit board, or by one or more circuit elements (for example, resistors and capacitors). The processor may be, for example, a CPU (Central Processing Unit). However, the processor is not limited to a CPU, and various processors such as a GPU (Graphics Processing Unit) or a DSP (Digital Signal Processor) can be used. The processor may also be an ASIC-based hardware circuit. The processor may further include an amplifier circuit, a filter circuit, and the like that process analog signals. The memory (storage unit 170) may be semiconductor memory such as SRAM or DRAM, or may be a register. It may also be a magnetic storage device such as a hard disk drive (HDD), or an optical storage device such as an optical disk device. For example, the memory stores computer-readable instructions, and the processing (functions) of each unit of the processing unit 100 is implemented by the processor executing those instructions. The instructions here may be an instruction set constituting a program, or instructions that direct the operation of the processor's hardware circuits.
The processing unit 100 includes an input processing unit 102, an arithmetic processing unit 110, and an output processing unit 140. The arithmetic processing unit 110 includes an information acquisition unit 111, a virtual space setting unit 112, a moving body processing unit 113, a virtual camera control unit 114, a game processing unit 115, a notification processing unit 116, a sensation device control unit 117, a display processing unit 120, and a sound processing unit 130. As described above, each process of the present embodiment executed by these units can be implemented by a processor (or a processor and memory). Various modifications are possible, such as omitting some of these components (units) or adding other components.
The input processing unit 102 performs, as input processing, processing for receiving operation information and tracking information, processing for reading information from the storage unit 170, and processing for receiving information via the communication unit 196. For example, the input processing unit 102 performs, as input processing, processing for acquiring operation information input by the user using the operation unit 160 and tracking information detected by the sensor unit 210 of the HMD 200 or the like, processing for reading information specified by a read command from the storage unit 170, and processing for receiving information from an external device (a server or the like) via a network. Here, the reception processing includes instructing the communication unit 196 to receive information, and acquiring information received by the communication unit 196 and writing it to the storage unit 170.
The arithmetic processing unit 110 performs various types of arithmetic processing. For example, it performs arithmetic processing such as information acquisition processing, virtual space setting processing, moving body processing, virtual camera control processing, game processing (simulation processing), display processing, and sound processing.
The information acquisition unit 111 (program module for information acquisition processing) performs acquisition processing for various types of information. For example, the information acquisition unit 111 acquires position information of the user wearing the HMD 200, and the like. The information acquisition unit 111 may also acquire direction information of the user, and the like.
The virtual space setting unit 112 (program module for virtual space setting processing) performs setting processing for the virtual space (object space) in which objects are arranged. For example, it performs processing for arranging and setting, in the virtual space, various objects representing display items such as moving bodies (people, robots, cars, trains, airplanes, ships, monsters, animals, and the like), maps (terrain), buildings, spectator seats, courses (roads), trees, walls, and water surfaces (objects composed of primitive surfaces such as polygons, free-form surfaces, or subdivision surfaces). That is, the position and rotation angle (synonymous with orientation or direction) of each object in the world coordinate system are determined, and the object is arranged at that position (X, Y, Z) with that rotation angle (rotation angles about the X, Y, and Z axes). Specifically, the object information storage unit 172 of the storage unit 170 stores object information, which is information such as the position, rotation angle, moving speed, and moving direction of each object (part object) in the virtual space, in association with an object number. The virtual space setting unit 112 performs, for example, processing for updating this object information every frame.
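The object information described above can be pictured as a table keyed by object number, updated once per frame. The sketch below is an illustrative assumption (field names and the update rule are hypothetical, not the disclosed data layout).

```python
# Illustrative sketch of object information keyed by object number: world
# position, rotation angle, moving speed, and moving direction, updated each
# frame. Field names are assumptions for the example.

objects = {
    1: {"pos": [0.0, 0.0, 0.0], "rot": [0.0, 0.0, 0.0],
        "speed": 2.0, "dir": [0.0, 0.0, 1.0]},
}

def update_object_info(objects, dt):
    """Advance each object's world position by speed * direction * dt."""
    for info in objects.values():
        info["pos"] = [p + info["speed"] * d * dt
                       for p, d in zip(info["pos"], info["dir"])]

# One frame at 60 fps.
update_object_info(objects, 1.0 / 60.0)
```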
The moving body processing unit 113 (program module for moving body processing) performs various processes relating to moving bodies that move within the virtual space. For example, it performs processing for moving a moving body in the virtual space (object space, game space) and processing for animating the moving body. For example, the moving body processing unit 113 performs control processing for moving a moving body (model object) in the virtual space and for animating the moving body (motion, animation), based on operation information input by the user via the operation unit 160, acquired tracking information, a program (movement/motion algorithm), various data (motion data), and the like. Specifically, it performs simulation processing that sequentially obtains movement information (position, rotation angle, speed, or acceleration) and motion information (positions or rotation angles of part objects) of the moving body every frame (for example, every 1/60 second). A frame is the unit of time in which the movement/motion processing (simulation processing) and image generation processing of the moving body are performed. The moving body is, for example, a user moving body corresponding to the user (player) in real space. The user moving body is a virtual user (virtual player, avatar) in the virtual space, or a boarding moving body (operated moving body) on which the virtual user rides (which the virtual user operates).
The virtual camera control unit 114 (program module for virtual camera control processing) controls the virtual camera. For example, it performs processing for controlling the virtual camera based on the user's operation information input via the operation unit 160, tracking information, and the like.
For example, the virtual camera control unit 114 controls the virtual camera set as the first-person viewpoint or third-person viewpoint of the user. For example, the position (position coordinates) and posture (rotation angle about a rotation axis) of the virtual camera are controlled by setting the virtual camera at a position corresponding to the viewpoint (first-person viewpoint) of the user moving body moving in the virtual space, and setting the viewpoint position and line-of-sight direction of the virtual camera. Alternatively, the position and posture of the virtual camera are controlled by setting the virtual camera at the position of a viewpoint (third-person viewpoint) that follows the user moving body, and setting the viewpoint position and line-of-sight direction of the virtual camera.
For example, the virtual camera control unit 114 controls the virtual camera so as to follow changes in the user's viewpoint, based on tracking information of the user's viewpoint information acquired by viewpoint tracking. For example, in the present embodiment, tracking information (viewpoint tracking information) of viewpoint information, which is at least one of the user's viewpoint position and line-of-sight direction, is acquired. This tracking information can be acquired by, for example, performing tracking processing of the HMD 200. The virtual camera control unit 114 then changes the viewpoint position and line-of-sight direction of the virtual camera based on the acquired tracking information (information on at least one of the user's viewpoint position and line-of-sight direction). For example, the virtual camera control unit 114 sets the virtual camera so that its viewpoint position and line-of-sight direction (position and posture) in the virtual space change in accordance with changes in the user's viewpoint position and line-of-sight direction in real space. In this way, the virtual camera can be controlled to follow changes in the user's viewpoint based on the tracking information of the user's viewpoint information.
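A first-person camera of this kind can be sketched as follows; the eye-height offset, the normalization step, and all names are illustrative assumptions rather than the disclosed control processing.

```python
# Illustrative sketch of first-person virtual camera control: the camera's
# viewpoint position follows the avatar, and its line-of-sight direction
# follows the tracked HMD gaze. Names and the eye-height offset are assumptions.

def set_virtual_camera(avatar_pos, eye_height, hmd_gaze_dir):
    """Return (viewpoint_position, line_of_sight) for the virtual camera."""
    x, y, z = avatar_pos
    viewpoint = (x, y + eye_height, z)
    # Normalize the tracked gaze direction so it can serve as a view vector.
    norm = sum(c * c for c in hmd_gaze_dir) ** 0.5
    gaze = tuple(c / norm for c in hmd_gaze_dir)
    return viewpoint, gaze
```

Called once per frame with fresh tracking info, this makes the camera follow the user's viewpoint changes as described above.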
The game processing unit 115 (program module for game processing) performs various game processes for the user to play the game. In other words, the game processing unit 115 (simulation processing unit) executes various simulation processes for the user to experience virtual reality. The game processes include, for example, a process of starting the game when a game start condition is satisfied, a process of advancing the started game, a process of ending the game when a game end condition is satisfied, and a process of computing game results.
The notification processing unit 116 (program module for notification processing) performs various types of notification processing, such as notifying the user of a warning. The notification processing may be, for example, notification using images or sound, or notification using a sensation device such as a vibration device, an acoustic device, or an air cannon. The sensation device control unit 117 (program module for sensation device control processing) performs various control processes for the sensation device; for example, it controls the sensation device so as to let the user experience virtual reality.
The display processing unit 120 (program module for display processing) performs display processing of a game image (simulation image). For example, it performs drawing processing based on the results of the various processes (game processing, simulation processing) performed by the processing unit 100, thereby generating an image and displaying it on the display unit 220. Specifically, geometry processing such as coordinate transformation (world coordinate transformation, camera coordinate transformation), clipping, perspective transformation, and light source processing is performed, and based on the results, drawing data (position coordinates of the vertices of primitive surfaces, texture coordinates, color data, normal vectors, alpha values, and the like) is created. Based on this drawing data (primitive surface data), the object (one or more primitive surfaces) after perspective transformation (after geometry processing) is drawn into the drawing buffer 178 (a buffer, such as a frame buffer or a work buffer, that can store image information in units of pixels). An image visible from the virtual camera (a given viewpoint; first and second viewpoints for the left eye and the right eye) in the virtual space is thereby generated. The drawing processing performed by the display processing unit 120 can be realized by vertex shader processing, pixel shader processing, and the like.
The sound processing unit 130 (program module for sound processing) performs sound processing based on the results of the various processes performed by the processing unit 100. Specifically, it generates game sounds such as music (BGM), sound effects, and voices, and causes the sound output unit 192 to output them. Part of the sound processing of the sound processing unit 130 (for example, three-dimensional acoustic processing) may be realized by the processing unit 240 of the HMD 200.
The output processing unit 140 performs output processing of various types of information. For example, the output processing unit 140 performs, as output processing, a process of writing information into the storage unit 170 and a process of transmitting information via the communication unit 196. For example, it writes information designated by a write command into the storage unit 170, or transmits information to an external apparatus (such as a server) via a network. The transmission processing includes, for example, instructing the communication unit 196 to transmit information and designating to the communication unit 196 the information to be transmitted.
As shown in FIG. 1, the simulation system of the present embodiment includes the virtual space setting unit 112, the moving body processing unit 113, and the display processing unit 120.
The virtual space setting unit 112 performs setting processing of the virtual space in which objects are arranged. For example, it performs processing of arranging, in the virtual space, an object of the user moving body corresponding to the user, objects of opponent moving bodies such as enemies, and objects constituting the map and background. The user moving body corresponding to the user is, for example, a moving body that the user operates with the operation unit 160, or a moving body that moves in the virtual space following the user's movement in the real space. This user moving body is what is called, for example, a character or an avatar. The user moving body may be a ridden moving body, such as a robot, on which the user boards. The user moving body may be a display object whose image is displayed, or may be a virtual object whose image is not displayed.
The moving body processing unit 113 performs processing of moving the user moving body (virtual camera) corresponding to the user in the virtual space. For example, it moves the user moving body in the virtual space based on operation information input by the user through the operation unit 160. Alternatively, when the information acquisition unit 111 acquires the user's position information in the real space, the moving body processing unit 113 moves the user moving body in the virtual space based on the acquired position information (viewpoint tracking information). For example, the user moving body is moved in the virtual space so as to follow the user's movement in the real space. For example, the position of the user moving body is updated every frame based on its moving speed and moving acceleration, thereby moving the user moving body in the virtual space (virtual field).
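The per-frame update mentioned above can be sketched as a simple integration of speed and position. This is a minimal illustrative sketch under the assumption of a fixed frame time; `update_moving_body` is a hypothetical name, not a function from the patent.

```python
# Hypothetical sketch: update the user moving body's position once per frame
# from its moving speed (velocity) and moving acceleration.

def update_moving_body(position, velocity, acceleration, dt):
    """Advance one frame: update velocity from acceleration, then
    position from velocity (dt is the frame duration in seconds)."""
    velocity = tuple(v + a * dt for v, a in zip(velocity, acceleration))
    position = tuple(p + v * dt for p, v in zip(position, velocity))
    return position, velocity


pos, vel = (0.0, 0.0, 0.0), (1.0, 0.0, 0.0)  # moving at 1 unit/s along x
for _ in range(60):  # one second of simulation at 60 frames per second
    pos, vel = update_moving_body(pos, vel, (0.0, 0.0, 0.0), 1.0 / 60.0)
```

After 60 frames at constant velocity, the moving body has advanced about one unit along the x axis in the virtual field.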
The display processing unit 120 (drawing processing unit) performs drawing processing of images (objects) of the virtual space. For example, it draws an image visible from the virtual camera (a given viewpoint) in the virtual space. For example, it draws an image visible from a virtual camera set at the viewpoint (first-person viewpoint) of a user moving body such as a character (avatar), or an image visible from a virtual camera set at a viewpoint (third-person viewpoint) that follows the user moving body. The generated images are desirably stereoscopic images, such as a left-eye image and a right-eye image.
In the present embodiment, the virtual space setting unit 112 sets a plurality of first to M-th virtual spaces (M is an integer of 2 or more) as the virtual space. Specifically, the virtual space setting unit 112 sets a first virtual space and a second virtual space. For example, it performs arrangement setting processing of the objects constituting the first virtual space and arrangement setting processing of the objects constituting the second virtual space. When the first virtual space is a room space as described later, arrangement setting processing of objects corresponding to the things placed in the room is performed. When the second virtual space is an ice-country space as described later, arrangement setting processing of objects corresponding to the glaciers, sea, and so on of the ice country is performed. The virtual space setting unit 112 further sets, for example, a third virtual space. When the third virtual space is a space on top of a train as described later, arrangement setting processing of objects corresponding to the train, a tunnel, the background, and so on is performed. The second virtual space is, for example, a virtual space linked to the first virtual space via a singular point (in other words, a passage point; the same applies hereinafter). The third virtual space may be, for example, a virtual space linked to the first virtual space via a singular point, or a virtual space linked to the second virtual space via a singular point.
Before a given switching condition (singular point passage condition) is satisfied (that is, while the switching condition is not satisfied), the display processing unit 120 sets the position information (position coordinates, direction, etc.) of the user moving body or the virtual camera as position information (position coordinates, direction, etc.) of the first virtual space. In other words, the position information of the user moving body or the virtual camera is associated with the first virtual space. The display processing unit 120 then performs, as a first drawing process, a process of drawing at least an image of the first virtual space. For example, as the first drawing process, the display processing unit 120 draws an image of the second virtual space in addition to the image of the first virtual space, thereby generating an image in which the image of the second virtual space is displayed in a region corresponding to the singular point (for example, a region enclosing the singular point). The drawing of the image of the first virtual space is realized, for example, by drawing the objects constituting the first virtual space and drawing an image visible from the virtual camera in the first virtual space. The process of drawing the image of the second virtual space in addition to the image of the first virtual space is realized, for example, by drawing the objects constituting the first virtual space in the regions (display regions) other than the region corresponding to the singular point, drawing the objects constituting the second virtual space in the region (display region) corresponding to the singular point, and thereby drawing an image visible from the virtual camera in the first virtual space.
On the other hand, when the switching condition is satisfied, the display processing unit 120 sets the position information of the user moving body or the virtual camera as position information of the second virtual space. For example, when the switching condition is satisfied by the user moving body or the virtual camera passing the place corresponding to the singular point in a first traveling direction, the position information of the user moving body or the virtual camera is set as position information of the second virtual space. That is, the position information of the user moving body or the virtual camera is associated with the second virtual space instead of the first virtual space. The display processing unit 120 then performs, as a second drawing process, a process of drawing at least an image of the second virtual space. For example, as the second drawing process, the display processing unit 120 draws an image of the first virtual space in addition to the image of the second virtual space, thereby generating an image in which the image of the first virtual space is displayed in the region corresponding to the singular point. The drawing of the image of the second virtual space is realized, for example, by drawing the objects constituting the second virtual space and drawing an image visible from the virtual camera in the second virtual space. The process of drawing the image of the first virtual space in addition to the image of the second virtual space is realized, for example, by drawing the objects constituting the second virtual space in the regions (display regions) other than the region corresponding to the singular point, drawing the objects constituting the first virtual space in the region (display region) corresponding to the singular point, and thereby drawing an image visible from the virtual camera in the second virtual space.
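The first and second drawing processes described above follow one symmetric rule: render the virtual space with which the camera's position information is currently associated everywhere except the region corresponding to the singular point, and render the other virtual space inside that region. A minimal sketch of this rule, with all names (`draw_frame`, the region labels) hypothetical:

```python
# Hypothetical sketch: decide, per screen region, which virtual space is
# drawn, depending on which space the camera is currently associated with.

def draw_frame(camera_space, singular_region_visible):
    """camera_space: the virtual space (1 or 2) with which the position
    information of the user moving body / virtual camera is associated.
    singular_region_visible: whether the region corresponding to the
    singular point (e.g. a door) is in view."""
    other_space = 2 if camera_space == 1 else 1
    layout = {"main_region": camera_space}
    if singular_region_visible:
        # The singular-point region shows the space on the other side.
        layout["singular_region"] = other_space
    return layout


# First drawing process: before the switching condition is satisfied.
before = draw_frame(camera_space=1, singular_region_visible=True)
# Second drawing process: after the switching condition is satisfied.
after = draw_frame(camera_space=2, singular_region_visible=True)
```

In a real renderer the singular-point region would typically be realized with a stencil mask or render-to-texture pass for the "other" space, but the association logic is as above.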
For example, suppose that the region (object or area) corresponding to the singular point is the region of a door (gate) described later. In this case, in the first drawing process of drawing the image of the second virtual space in addition to the image of the first virtual space, drawing is performed so that the image of the second virtual space on the far side of the door is displayed in the door region. Similarly, in the second drawing process of drawing the image of the first virtual space in addition to the image of the second virtual space, drawing is performed so that the image of the first virtual space on the far side of the door is displayed in the door region.
The display processing unit 120 generates an image in which the image of the first virtual space is displayed in the region corresponding to the singular point when the line-of-sight direction of the virtual camera faces the side opposite to the first traveling direction. For example, after the user moving body or the virtual camera passes the place corresponding to the singular point, if the line-of-sight direction of the virtual camera turns back toward the side opposite to the traveling direction, an image visible in that line-of-sight direction is generated in which the image of the first virtual space is displayed in the region corresponding to the singular point. For example, by drawing the image of the first virtual space in addition to the image of the second virtual space, an image is generated in which the objects constituting the second virtual space are drawn in the regions (the main region) other than the region corresponding to the singular point, and the objects constituting the first virtual space are drawn in the region corresponding to the singular point. On the other hand, when the virtual camera goes around to the back side of the place corresponding to the singular point and its line-of-sight direction faces that place, the second drawing process draws the image of the second virtual space and does not draw the image of the first virtual space. Specifically, the image of the second virtual space is drawn even in the region corresponding to the singular point. Note that the side opposite to the first traveling direction need not be exactly the opposite direction of the first traveling direction; if the first traveling direction is taken as, for example, the positive direction, it corresponds to, for example, the negative direction.
Here, a singular point is a point at which a rule that otherwise applies ceases to apply. For example, while the user moving body or the virtual camera is moving in the first virtual space, the rule that it moves only within the first virtual space applies to it. That is, the user moving body or the virtual camera moves under the rule of moving within the first virtual space. The singular point of the present embodiment is a point at which such a rule ceases to apply and is no longer followed. For example, in the present embodiment, when the switching condition is satisfied by passing the singular point, the rule (criterion) of moving within the first virtual space is no longer applied to the user moving body or the virtual camera, which then moves in the second virtual space, different from the first virtual space. In other words, the singular point can be said to be a point that switches whether the position information of the user moving body or the virtual camera is associated with the position information of the first virtual space or with that of the second virtual space. The first virtual space and the second virtual space are linked via the singular point so that, upon satisfaction of the switching condition, the association with the position information of the user moving body or the virtual camera is switched. For example, information for linking the first virtual space and the second virtual space via the singular point is stored in the storage unit 170. The switching condition of the present embodiment is a condition for switching, at the singular point, the association between the position information of the user moving body or the virtual camera and the position information of the first and second virtual spaces. The same applies to the links via singular points between the first or second virtual space and the third virtual space. Note that the place corresponding to the singular point need not be a point; it may be a surface or a region, for example a surface or region enclosing the singular point.
When the user moving body or the virtual camera passes the singular point in a second traveling direction different from the first traveling direction, the display processing unit 120 sets the position information of the user moving body or the virtual camera as position information of the first virtual space. The display processing unit 120 then performs, as the first drawing process, a process of drawing at least an image of the first virtual space. For example, as the first drawing process, it draws an image of the first virtual space, or draws an image of the second virtual space in addition to the image of the first virtual space. For example, when the user moving body or the virtual camera that has moved from the first virtual space to the second virtual space via the singular point returns to the first virtual space, its position information is associated with the first virtual space, and the first drawing process in the first virtual space is performed. In this way, the user moving body and the like can go back and forth between the first virtual space and the second virtual space via the singular point. Also, when the user moving body or the virtual camera, to which the rule (criterion) of moving only within the second virtual space had been applied, passes the place corresponding to the singular point in the second traveling direction, that rule is no longer applied, and it becomes able to move within the first virtual space.
Here, the second traveling direction is a direction different from the first traveling direction. For example, if the first traveling direction is the positive direction with respect to the singular point (the place of the singular point), the second traveling direction is the negative direction with respect to the singular point. As one example, the second traveling direction is the direction opposite to the first traveling direction.
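The two-way passage described above amounts to a direction-dependent switch of the space association: crossing the singular point in the first traveling direction associates the position information with the second virtual space, and crossing in the second (e.g. opposite) direction restores the association with the first. A minimal sketch under that assumption; `update_space_association` and the sign convention are hypothetical:

```python
# Hypothetical sketch: switch which virtual space the position information of
# the user moving body / virtual camera is associated with, depending on the
# direction in which the singular point is passed.

def update_space_association(current_space, crossing_direction):
    """crossing_direction: +1 for passage in the first traveling direction,
    -1 for passage in the second (opposite) traveling direction,
    0 when the singular point was not passed this frame."""
    if crossing_direction == +1 and current_space == 1:
        return 2  # switching condition satisfied: associate with space 2
    if crossing_direction == -1 and current_space == 2:
        return 1  # passage back: associate with space 1 again
    return current_space  # no passage, association unchanged


space_after_forward = update_space_association(1, +1)   # enter space 2
space_after_back = update_space_association(space_after_forward, -1)
```

With this rule the user moving body can go back and forth between the two spaces through the same singular point, as the text describes.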
When the switching condition (a second switching condition) is satisfied by the user moving body or the virtual camera passing a place corresponding to a singular point (a second singular point), the display processing unit 120 sets the position information of the user moving body or the virtual camera as position information of the third virtual space. For example, the position information of the user moving body is associated with the third virtual space. Here, the movement to the third virtual space may be movement from the first virtual space to the third virtual space via a singular point linking the first and third virtual spaces, or movement from the second virtual space to the third virtual space via a singular point linking the second and third virtual spaces. The display processing unit 120 then performs, as a third drawing process, a process of drawing at least an image of the third virtual space. For example, when the third virtual space is linked to the first virtual space via a singular point, the third drawing process draws, for example, an image of the third virtual space, or an image of the first virtual space in addition to the image of the third virtual space. When the third virtual space is linked to the second virtual space via a singular point, the third drawing process draws, for example, an image of the third virtual space, or an image of the second virtual space in addition to the image of the third virtual space.
When a plurality of users play, the display processing unit 120 permits the third drawing process on the condition that the plurality of user moving bodies or virtual cameras corresponding to the plurality of users have passed the place corresponding to the singular point and returned to the first virtual space. For example, suppose that the first user moving body or first virtual camera corresponding to a first user has returned from the second virtual space to the first virtual space, but the second user moving body or second virtual camera corresponding to a second user has not. In this case, even if the switching condition (the second switching condition) is satisfied, the movement of the first user moving body or first virtual camera to the third virtual space via the singular point is not permitted, and the third drawing process is not permitted. On the other hand, when the switching condition is satisfied after both the first and second user moving bodies, or both the first and second virtual cameras, have returned from the second virtual space to the first virtual space, their movement to the third virtual space via the singular point is permitted, and the third drawing process is permitted.
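The multi-user condition above is essentially an "all players back" gate on the third drawing process. A minimal sketch, assuming a simple mapping from user to currently associated space; `third_drawing_permitted` is a hypothetical name:

```python
# Hypothetical sketch: permit the third drawing process (and the movement to
# the third virtual space) only once every user's moving body or virtual
# camera has returned to the first virtual space.

def third_drawing_permitted(user_spaces):
    """user_spaces: mapping of user id -> the virtual space (1, 2, ...) the
    user's moving body / virtual camera is currently associated with."""
    return all(space == 1 for space in user_spaces.values())


# One user is still in the second virtual space: not permitted.
blocked = third_drawing_permitted({"user1": 1, "user2": 2})
# All users have returned to the first virtual space: permitted.
allowed = third_drawing_permitted({"user1": 1, "user2": 1})
```

The same check would be consulted when the second switching condition is satisfied: if it returns false, the passage through the second singular point is simply refused.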
The display processing unit 120 determines whether the switching condition is satisfied based on whether the user moving body or the virtual camera has passed the singular point. For example, when the user moving body or the virtual camera has not passed the singular point, it determines that the switching condition is not satisfied; when the user moving body or the virtual camera has passed the singular point, it determines that the switching condition is satisfied.
The display processing unit 120 may also determine whether the switching condition is satisfied based on user input information or sensor detection information. For example, the display processing unit 120 determines whether the switching condition is satisfied based on the user's operation information, voice input information, or the like input via the operation unit 160, for example based on user input information instructing that the switching condition be established. Alternatively, the display processing unit 120 determines whether the switching condition is satisfied based on the detection information of a sensor. For example, a sensor is provided on an object corresponding to the singular point, and whether the switching condition is satisfied is determined based on the detection information of that sensor. When the object is a door (gate), the open/closed state of the door is detected based on the detection information of the sensor to determine whether the switching condition is satisfied.
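The alternative determinations above can be combined into one predicate: the switching condition holds if the singular point was passed, or if user input instructs the switch, or if the sensor on the corresponding object (e.g. the door) reports the triggering state. This is an illustrative sketch only; the function and parameter names are hypothetical:

```python
# Hypothetical sketch: judge the switching condition from passage of the
# singular point, from user input, or from the detection information of a
# sensor provided on the object (e.g. a door) corresponding to the point.

def switching_condition_satisfied(passed_singular_point=False,
                                  user_input_switch=False,
                                  door_sensor_open=None):
    if passed_singular_point:
        return True  # basic criterion: the singular point was passed
    if user_input_switch:
        return True  # operation or voice input instructing the switch
    if door_sensor_open:
        return True  # sensor detects the door in the open state
    return False


by_sensor = switching_condition_satisfied(door_sensor_open=True)
not_yet = switching_condition_satisfied()
```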
In the present embodiment, an object corresponding to the singular point is placed in the play field (play space) of the real space in which the user moves. In the case of the real-space room described later, for example, a door object corresponding to the singular point is placed. When the user passes the location of that object in the real space, the display processing unit 120 determines that the user moving body or the virtual camera has passed the singular point. For example, the passage condition of the singular point is judged by whether the user has passed the location of the object in the real space. When the user has passed the location of the object, it is determined that the switching condition is satisfied, the position information of the user moving body or the virtual camera is set as position information of the second virtual space, and the second drawing process is performed.
The display processing unit 120 may also perform processing for setting or unsetting a singular point. For example, a singular point in the unset state is placed in the set state, or a singular point in the set state is placed in the unset state. The setting process is, for example, a process of displaying the object corresponding to the singular point in a virtual space such as the first to third virtual spaces, or of permitting movement between virtual spaces via the singular point. The unsetting process is, for example, a process of hiding the object corresponding to the singular point in a virtual space such as the first to third virtual spaces, or of disallowing movement between virtual spaces via the singular point.
The simulation system of this embodiment also includes a sensation device control unit 117. The sensation device control unit 117 controls a sensation device for allowing the user to experience virtual reality. A sensation device is, for example, a device that acts on a sensory organ of the user other than the visual organ in order to let the user experience virtual reality. The sensation device is, for example, a blower or a vibration device described later. The blower can be realized by, for example, a sirocco fan or a propeller fan, and may blow hot or cold air. The vibration device can be realized by, for example, a transducer or a vibration motor. The sensation device may also be a halogen heater or the like; for example, it may be a device that lets the user feel heat or hot air through the radiant heat of a halogen lamp. The sensation device may also be realized by a mechanical mechanism such as an air spring or an electric cylinder, for example a device that makes the user feel shaking or tilting by means of such a mechanism.
The sensation device control unit 117 then makes the control of the sensation device when the switching condition is not satisfied different from its control when the switching condition is satisfied. For example, the sensation device control unit 117 may leave the sensation device inactive when the switching condition is not satisfied (before it is satisfied) and operate the sensation device when the switching condition is satisfied (after it is satisfied). Alternatively, the control mode of the sensation device, or the type of sensation device controlled, may differ between the two cases. For example, when the switching condition is not satisfied, the sensation device is controlled in a first control mode, and when the switching condition is satisfied, the sensation device is controlled in a second control mode different from the first. The first and second control modes differ in the degree of sensation imparted to the user. Taking a blower as an example, the two control modes differ in the strength of the airflow or the temperature of the air; taking a vibration device as an example, they differ in vibration intensity or vibration duration. Alternatively, a first type of sensation device may be controlled when the switching condition is not satisfied, and a second type when it is satisfied. That is, the type of sensation device to be controlled is varied depending on whether the switching condition is satisfied.
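A minimal sketch of the two control modes for a blower, with hypothetical strength and temperature values chosen purely for illustration (the embodiment does not specify concrete parameters):

```python
# Hypothetical sketch: select the blower's control parameters depending on
# whether the switching condition is satisfied. In the first control mode the
# air is weak and at ambient temperature; in the second (after switching to
# the ice-country virtual space, say) it is strong and cold.

def blower_control(switch_satisfied):
    """Return (strength 0..1, air temperature label) for the blower device."""
    if not switch_satisfied:
        return (0.2, "ambient")   # first control mode
    return (0.8, "cold")          # second control mode
```

The same pattern extends to a vibration device (intensity and duration per mode) or to selecting an entirely different device type per condition state.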
The simulation system also includes a notification processing unit 116. When a plurality of users play, the notification processing unit 116 performs output processing of notification information about collisions between users in the real space. For example, based on the users' position information acquired by the information acquisition unit 111, it performs warning notification processing about collisions between users. For example, it performs prediction processing as to whether the users will come into a collision (approaching) positional relationship, and when such a relationship arises, it performs notification processing warning that there is a risk of collision. The prediction processing can be realized by, for example, judging from the position, velocity, or acceleration of each user moving body corresponding to each user whether the users are likely to come into a collision positional relationship.
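One possible form of the prediction processing, sketched with hypothetical parameters (a one-second horizon and a 0.8 m warning distance are illustrative choices, not values from the embodiment): extrapolate each user's position by velocity and warn when the predicted separation falls below a threshold.

```python
import math

# Hypothetical sketch of collision prediction between two users: sample the
# straight-line extrapolation of each position over a short horizon and
# report a warning if the predicted distance drops below the threshold.

def predict_collision(p1, v1, p2, v2, horizon=1.0, threshold=0.8):
    """p1, p2 are (x, y) positions in metres; v1, v2 are velocities in m/s."""
    for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
        t = frac * horizon
        q1 = (p1[0] + v1[0] * t, p1[1] + v1[1] * t)
        q2 = (p2[0] + v2[0] * t, p2[1] + v2[1] * t)
        if math.dist(q1, q2) < threshold:
            return True
    return False
```

When this returns True, the system would trigger the warning output (HMD image, sound, vibration, or another sensation device, as described below).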
The warning notification processing can be realized by an image displayed on the HMD 200; by sound output from headphones or from speakers installed in the play field; by vibration from a vibration device provided in the user's equipment such as a weapon, clothing, or accessory; or by various sensation devices (devices using wind, vibration, light, an air cannon, sound, or the like) installed in the real-space field.
As shown in FIG. 1, the simulation system of this embodiment includes an information acquisition unit 111 that acquires the user's position information in the real space. For example, the information acquisition unit 111 acquires the user's position information through viewpoint tracking of the user or the like. The moving body processing unit 113 performs processing for moving the user moving body based on the acquired position information, and the display processing unit 120 generates a display image for the HMD 200 worn by the user. For example, the user moving body in the virtual space is moved so as to follow the user's movement in the real space, and an image seen from the virtual camera corresponding to that user moving body is generated as the display image of the HMD 200. Before the switching condition is satisfied (when it is not satisfied), the display processing unit 120 sets the position information of the user moving body or the virtual camera, specified from the user's position information in the real space, as position information of the first virtual space. For example, the position information of the user moving body or the virtual camera is associated with the first virtual space, and the first drawing process is performed, which draws at least an image of the first virtual space. On the other hand, when the switching condition is satisfied, the position information of the user moving body or the virtual camera specified from the user's real-space position information is set as position information of the second virtual space. For example, that position information is associated with the second virtual space, and the second drawing process is performed, which draws at least an image of the second virtual space.
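The binding of the same real-space coordinates to the currently active virtual space can be sketched as follows; the per-space origin offsets are hypothetical, standing in for however the embodiment lays out its virtual spaces:

```python
# Hypothetical sketch: the tracked real-space position is mapped into the
# active virtual space by adding that space's origin offset, so the same
# physical walk is reinterpreted as movement in the first or second world.

def to_virtual(real_pos, switch_satisfied,
               origin_vs1=(0.0, 0.0, 0.0), origin_vs2=(1000.0, 0.0, 0.0)):
    """Map a real-space (x, y, z) position into the active virtual space."""
    origin = origin_vs2 if switch_satisfied else origin_vs1
    return tuple(o + p for o, p in zip(origin, real_pos))
```

The moving body and virtual camera would be placed at the returned coordinates before the first or second drawing process runs.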
For example, the information acquisition unit 111 acquires the position information of a user wearing the HMD 200 so as to cover the field of view. For example, the information acquisition unit 111 acquires the user's position information in the real space based on tracking information of the HMD 200 or the like; for example, the position information of the HMD 200 is acquired as the position information of the user wearing it. Specifically, when the user is located in a play field (simulation field, play area) of the real space (real world), the position information in that play field is acquired. Note that the user's position information may instead be acquired by a method that directly tracks the user, or a part such as the user's head, rather than by the tracking processing of the HMD 200.
The virtual camera control unit 114 controls the virtual camera so as to follow changes in the user's viewpoint, based on tracking information of the user's viewpoint information.
For example, the input processing unit 102 (input reception unit) acquires tracking information of the viewpoint information of the user wearing the HMD 200. For example, it acquires tracking information (viewpoint tracking information) of viewpoint information that is at least one of the user's viewpoint position and line-of-sight direction. This tracking information can be acquired, for example, by performing tracking processing of the HMD 200; the user's viewpoint position and line-of-sight direction may also be acquired directly by the tracking processing. As an example, the tracking information can include at least one of change information of the viewpoint position from the user's initial viewpoint position (the change in the coordinates of the viewpoint position) and change information of the line-of-sight direction from the user's initial line-of-sight direction (the change in the rotation angle about the rotation axis of the line-of-sight direction). Based on the change information of the viewpoint information included in such tracking information, the user's viewpoint position and line-of-sight direction (the position and posture of the user's head) can be specified.
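A minimal sketch of recovering the current viewpoint from the initial viewpoint plus the change information described above (yaw-only rotation is an illustrative simplification; real head tracking uses full 3-DOF orientation):

```python
# Hypothetical sketch: the tracking information carries change values relative
# to the initial viewpoint; the current viewpoint is the initial viewpoint
# plus those changes. Rotation is simplified to yaw about the vertical axis.

def current_viewpoint(initial_pos, initial_yaw, delta_pos, delta_yaw):
    """Positions are (x, y, z) in metres; yaw is in degrees about the vertical axis."""
    pos = tuple(p + d for p, d in zip(initial_pos, delta_pos))
    yaw = (initial_yaw + delta_yaw) % 360.0
    return pos, yaw
```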
In this embodiment, a virtual reality simulation process is performed as the game process of the game played by the user. The virtual reality simulation process is a simulation process for simulating an event of the real space in the virtual space, and for letting the user virtually experience that event. For example, a moving body such as a virtual user corresponding to the real-space user, or a vehicle on which that virtual user rides, is moved in the virtual space, and processing is performed to let the user experience the changes in environment and surroundings that accompany the movement.
Note that the processing of the simulation system of this embodiment in FIG. 1 can be realized by a processing device such as a PC installed in a facility, by a processing device worn by the user, or by distributed processing among these devices. Alternatively, the processing of the simulation system of this embodiment may be realized by a server system and a terminal device, for example by distributed processing between the server system and the terminal device.
2. Tracking Processing
Next, an example of tracking processing will be described. FIG. 2A shows an example of the HMD 200 used in the simulation system of this embodiment. As shown in FIG. 2A, the HMD 200 is provided with a plurality of light receiving elements 201, 202, and 203 (photodiodes). The light receiving elements 201 and 202 are provided on the front side of the HMD 200, and the light receiving element 203 is provided on the right side of the HMD 200. Light receiving elements (not shown) are also provided on the left side, the top surface, and so on of the HMD.
The movement of the hands and fingers of the user US can be detected by so-called leap motion processing. For example, a controller (not shown) is attached to the HMD 200 or the like, and detection of hand and finger movement is realized based on this controller. The controller has, for example, a light emitting unit such as an LED that emits infrared light, and a plurality of infrared cameras that photograph the hands and fingers illuminated by that light; the movement of the hands and fingers is detected based on image analysis of the images captured by the infrared cameras. By providing such a controller, the movement of the user's hands and fingers when opening a door, for example, can be detected.
The HMD 200 is also provided with a headband 260 and the like, so that the user US can stably wear the HMD 200 on the head with a comfortable fit. The HMD 200 is further provided with a headphone terminal (not shown); by connecting headphones 270 (sound output unit 192) to this terminal, the user US can listen to game sound processed, for example, with three-dimensional acoustics (3D audio). Operation information of the user US may also be input by detecting nodding or head-shaking motions of the user's head with the sensor unit 210 of the HMD 200 or the like.
The user US also wears a processing device (backpack PC, not shown), for example on the back. For example, the user US wears a jacket with the processing device attached to its back side. The processing device is realized by an information processing device such as a notebook PC, and is connected to the HMD 200 by a cable (not shown). For example, the processing device generates the images (game images and the like) to be displayed on the HMD 200, and the generated image data is sent to the HMD 200 via the cable and displayed there. Besides such image generation, this processing device can perform each process of this embodiment (information acquisition processing, virtual space setting processing, moving body processing, virtual camera control processing, game processing, notification processing, sensation device control processing, display processing, sound processing, and so on). Each process of this embodiment may instead be realized by a processing device (not shown) such as a PC installed in the facility, or by distributed processing between that device and the processing device worn by the user US.
As shown in FIG. 2B, base stations 280 and 284 are installed around the simulation system. The base station 280 is provided with light emitting elements 281 and 282, and the base station 284 is provided with light emitting elements 285 and 286. The light emitting elements 281, 282, 285, and 286 are realized by, for example, LEDs that emit a laser (an infrared laser or the like), and the base stations 280 and 284 use them to emit lasers radially, for example. The light receiving elements 201 to 203 and so on provided on the HMD 200 of FIG. 2A receive the lasers from the base stations 280 and 284, whereby tracking of the HMD 200 is realized and the position and facing direction of the head of the user US (viewpoint position, line-of-sight direction) can be detected.
FIG. 3A shows another example of the HMD 200. In FIG. 3A, a plurality of light emitting elements 231 to 236 are provided on the HMD 200. These light emitting elements 231 to 236 are realized by LEDs, for example. The light emitting elements 231 to 234 are provided on the front side of the HMD 200, and the light emitting element 235 and the light emitting element 236 (not shown) are provided on the back side. These light emitting elements 231 to 236 emit light in the visible band, for example; specifically, they emit light of mutually different colors.
The imaging unit 150 shown in FIG. 3B is installed at at least one place around the user US (for example, on the front side, or on both the front and back sides), and captures the light of the light emitting elements 231 to 236 of the HMD 200. That is, the spot light of these light emitting elements 231 to 236 appears in the image captured by the imaging unit 150. By performing image processing on this captured image, tracking of the head (HMD) of the user US is realized; that is, the three-dimensional position and facing direction of the head of the user US (viewpoint position, line-of-sight direction) are detected.
For example, as shown in FIG. 3B, the imaging unit 150 is provided with first and second cameras 151 and 152; by using the first and second images captured by these cameras, the position of the head of the user US in the depth direction and the like can be detected. Based on motion detection information from a motion sensor provided in the HMD 200, the rotation angle (line of sight) of the head of the user US can also be detected. Therefore, by using such an HMD 200, whichever of the surrounding 360 degrees the user US faces, the corresponding image in the virtual space (virtual three-dimensional space), that is, the image seen from the virtual camera corresponding to the user's viewpoint, can be displayed on the display unit 220 of the HMD 200.
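As background on how two cameras yield the depth-direction position, the standard stereo relation can be sketched as follows; the focal length and baseline values are hypothetical, and the embodiment does not specify its depth computation:

```python
# Hypothetical sketch of depth from stereo for the two-camera setup above:
# if the same LED spot appears at horizontal pixel coordinates x_left and
# x_right in the two rectified images, the depth follows from the disparity,
# the focal length in pixels, and the camera baseline in metres.

def depth_from_disparity(x_left, x_right, focal_px=700.0, baseline_m=0.1):
    """Return depth in metres, or None if the spot cannot be triangulated."""
    disparity = x_left - x_right    # pixels
    if disparity <= 0:
        return None
    return focal_px * baseline_m / disparity
```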
Note that infrared LEDs, rather than visible-light LEDs, may be used as the light emitting elements 231 to 236. The position, movement, and so on of the user's head may also be detected by other methods, for example by using a depth camera.
The tracking method for detecting the user's viewpoint position and line-of-sight direction (the user's position and direction) is not limited to the methods described with reference to FIGS. 2A to 3B. For example, the tracking processing may be realized by the HMD 200 alone, using a motion sensor or the like provided in the HMD 200; that is, without providing external devices such as the base stations 280 and 284 of FIG. 2B or the imaging unit 150 of FIG. 3B. Alternatively, viewpoint information such as the user's viewpoint position and line-of-sight direction may be detected by various known viewpoint tracking methods such as eye tracking, face tracking, or head tracking.
3. Method of This Embodiment
Next, the method of this embodiment will be described in detail. In the following, the case where an image generated by the simulation system of this embodiment is displayed on an HMD is mainly taken as an example. However, the display unit on which the generated image is displayed is not limited to an HMD, and may be an ordinary display used in a home game device, an arcade game device, or a PC. The following also mainly takes as an example the case where the target of the singular-point passage judgment, of the setting of virtual-space position information, and so on is the user moving body, of the user moving body and the virtual camera; however, that target may instead be the virtual camera. In the following description, the user moving body corresponding to the user is referred to as a user character. The method of this embodiment can be applied to various games (a virtual experience game, battle game, RPG, action game, competition game, sports game, horror experience game, simulation game for vehicles such as trains and airplanes, puzzle game, communication game, music game, and so on), and is also applicable outside of games.
3.1 Game Description
First, a VR (virtual reality) game realized by this embodiment will be described. This game is a virtual reality experience game in which the player can move through a plurality of virtual spaces (virtual worlds).
FIGS. 4 and 5 are explanatory diagrams of the room play field FL used in the simulation system of this embodiment. FIG. 4 is a perspective view of the play field FL, and FIG. 5 is a top view.
In the play field FL (play area, play space) simulating a room, a door DR, a desk DK, a bookshelf BS, and so on are arranged, and windows WD1 and WD2 are provided in the walls. The user enters this room-like play field FL and enjoys the VR game. In FIG. 4, a plurality of users US1 and US2 have entered the room, and these two users US1 and US2 can enjoy the VR game together.
Each of the users US1 and US2 wears, for example, a processing device (backpack PC) on the back, and the images generated by these processing devices are displayed on HMD1 and HMD2 (head-mounted display devices). A management processing device (not shown) is also placed in the play field FL, and it performs data synchronization processing (communication processing) between the processing devices worn by the users US1 and US2. For example, synchronization processing is performed so that a user character (a user moving body in a broad sense) corresponding to the user US2 is displayed on the HMD1 of the user US1, and a user character corresponding to the user US1 is displayed on the HMD2 of the user US2. The management processing device can also perform control processing of the sensation devices and the like. An operator also stands by in the room, operating the management processing device, helping the users US1 and US2 put on the HMD1, HMD2, and jackets, and performing operation and guidance work for the progress of the game.
The base stations 280 and 284 described with reference to FIG. 2B are installed in the room of the play field FL, and the position information of the users US1 and US2 can be acquired using these base stations 280 and 284. As shown in FIG. 5, the play field FL is also provided with a blower BL and a vibration device VB, which are sensation devices for letting the users experience virtual reality. The vibration device VB is realized by, for example, a transducer installed under the floor of the room.
In the game of this embodiment, when the real-world door DR is opened, another world (a world different from the scenery of the room) spreads out beyond the door DR. The user can experience the virtual reality of passing through the door DR into that other world.
For example, in FIG. 4, images of the first virtual space corresponding to the room are displayed on the HMD1 and HMD2 of the users US1 and US2. Specifically, objects corresponding to the objects placed and installed in the room are arranged in the first virtual space (first object space); for example, objects corresponding to the door DR, the desk DK, the bookshelf BS, and the windows WD1 and WD2 are arranged. In this first virtual space, images seen from the virtual cameras (first and second virtual cameras) corresponding to the viewpoints (first and second viewpoints) of the users US1 and US2 are generated and displayed on the HMD1 and HMD2 (first and second display units). In this way, the users US1 and US2, moving about while wearing the HMD1 and HMD2, can experience the virtual reality of walking around what feels like a real room.
As shown in FIG. 6A, when the users (US1, US2) open the door DR for the first time, the other side of the door DR remains the room, and an image of the room is displayed in the region of the door DR (in a broad sense, the region corresponding to a singular point). Then, after the user closes the door DR as shown in FIG. 6B and opens it again as shown in FIG. 6C, the other side of the door DR changes to an ice country. That is, as shown in FIG. 7, an image of the ice country, which is an image of the second virtual space VS2 (second object space), is displayed in the region of the door DR (the door opening region). At this time, as shown in FIG. 7, images of the room, such as the bookshelf BS and the windows WD1 and WD2, are displayed around the door DR. That is, while the image of the ice country, which is an image of the second virtual space VS2, is displayed in the region of the door DR (the region corresponding to the singular point), the image of the room, which is an image of the first virtual space VS1, is displayed in the region other than the door DR.
Then, for example, when the user US1 passes through the door DR (passes through the place corresponding to the singular point) and moves to the other side of the door DR, the user character UC1 (in a broad sense, a user moving body) corresponding to the user US1 moves from the room, which is the first virtual space VS1, to the ice country, which is the second virtual space VS2, as shown in FIG. 8. Here, the user character UC1 is a character (display object) that moves in the virtual space as the user US1 moves in the real space, and is also called an avatar.
Note that the present embodiment mainly describes the case where the position information of the user US1 in the real space is acquired and the user character UC1 is moved in the virtual space (first and second virtual spaces) based on the acquired position information; however, the present embodiment is not limited to this. For example, the user character UC1 may be moved in the virtual space (first and second virtual spaces) based on operation information from the operation unit 160 (a game controller or the like) in FIG. 1. Further, the movement of the hands and fingers of the user US1 (US2) is detected by, for example, Leap Motion processing. Then, based on this detection result, motion processing is performed that moves the hand and finger parts (part objects) of the user character UC1 corresponding to the user US1. Thus, when the user US1 opens the door DR, for example, the user US1 can visually recognize the movement of his or her own hands and fingers by watching the movement of the hand and finger parts of the user character UC1.
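As a minimal sketch of how tracked real-space coordinates could drive the user character described above (the patent specifies the behavior, not an implementation; class and method names such as `Avatar` and `update_from_tracking` are hypothetical):

```python
# Hypothetical sketch: mapping a tracked real-space position (e.g. from
# base stations 280/284) to a user character (avatar) in the active
# virtual space. All names here are illustrative, not from the patent.

class Avatar:
    def __init__(self):
        # The position is interpreted relative to the currently active
        # virtual space: 1 = room (VS1), 2 = ice country (VS2).
        self.position = (0.0, 0.0, 0.0)
        self.active_space = 1

    def update_from_tracking(self, tracked_pos, origin, scale=1.0):
        # Convert tracking-system coordinates to virtual-space
        # coordinates with a simple offset-and-scale transform.
        self.position = tuple((p - o) * scale
                              for p, o in zip(tracked_pos, origin))

avatar = Avatar()
avatar.update_from_tracking((1.5, 0.0, 2.0), origin=(0.5, 0.0, 1.0))
print(avatar.position)  # (1.0, 0.0, 1.0)
```

A real system would also apply the HMD's orientation to the virtual camera; this sketch covers only the position mapping.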
In FIG. 8, a vast VR space spreads over the entire field of view of the user US1 (UC1) wearing the HMD1, so the sense of virtual reality of the user US1 can be significantly enhanced. That is, the user US1 can feel a virtual reality as if he or she were in a real ice country. The user character UC1 corresponding to the user US1 can then freely walk around the ice country, which is the second virtual space.
FIG. 9 is an example of an image displayed when the user character UC1, having moved to the ice country that is the second virtual space VS2 as shown in FIG. 8, looks back toward the door DR. As shown in FIG. 9, an image of the room, which is an image of the first virtual space VS1, is displayed in the region of the door DR. At this time, as shown in FIG. 9, an image of the ice country, which is an image of the second virtual space VS2, is displayed around the door DR. That is, while the image of the room, which is an image of the first virtual space VS1, is displayed in the region of the door DR, the image of the ice country, which is an image of the second virtual space VS2, is displayed in the region other than the door DR. When the user character UC1 moves toward the door DR and passes through it again, the user character UC1 can return to the room that is the first virtual space VS1. That is, by passing through the door DR in a first traveling direction in FIG. 7, the user character can move (warp) from the first virtual space VS1 (room) to the second virtual space VS2 (ice country). Conversely, in FIG. 9, by passing through the door DR in a second traveling direction different from the first traveling direction, the user character can move from the second virtual space VS2 (ice country) to the first virtual space VS1 (room). That is, the user character UC1 can freely go back and forth between the first virtual space VS1 and the second virtual space VS2 through the door DR (the place corresponding to the singular point).
FIGS. 10A and 10B show a situation in which the users US1 and US2 have passed through the door DR. At this time, the user characters UC1 and UC2 corresponding to the users US1 and US2 are located in the ice country, which is the second virtual space. That is, the position information of the user characters UC1 and UC2 is set as position information of the second virtual space.
In the vicinity of the users US1 and US2, the blower BL and the vibration device VB, which are sensation devices, are installed. The blower BL is installed on the front side of the users US1 and US2, and the vibration device VB is installed under the floor around the users US1 and US2. When the users US1 and US2 pass through the door DR and move to its other side, the blower BL starts blowing, and the users US1 and US2 feel the wind (cold air). In addition, the whistling sound of a snowstorm is output from the speakers of the headphones worn by the users US1 and US2. As a result, the users US1 and US2 can feel a virtual reality as if they had come to a real ice country.
Further, when the users US1 and US2 move close to the vibration device VB, a game effect is performed in which a glacier in the second virtual space collapses, as shown at A1 in FIG. 10A and A2 in FIG. 10B. That is, the glacier that existed at A1 in FIG. 10A collapses and no longer exists at A2 in FIG. 10B. Specifically, an image showing the glacier collapsing is displayed on the HMD1 and HMD2 of the users US1 and US2, and the sound of the glacier collapsing is output from the speakers of the headphones worn by the users US1 and US2. As a result, the users US1 and US2 can experience the thrill of feeling endangered by the collapsing glacier.
Note that, as shown at A3 in FIG. 10A and A4 in FIG. 10B, the range in which the users US1 and US2 can move is the same as the room area, and they cannot move beyond the walls of the room in FIG. 4. For example, when the users US1 and US2 move close to a wall of the room, notification information warning of a collision with the wall or the like is displayed on the HMD1 and HMD2, so that such a collision can be avoided.
As described above, according to the simulation system of the present embodiment, as shown in FIG. 7, the scenery of the ice country spreads beyond the door the user has opened, and with the sun and the whiteness of the ice, the user sees a landscape completely different from the room. The dazzling light, together with the sound of the blowing wind, enhances the sense of virtual reality.
The user can then step through the door. In addition to the scenery, sounds, and white breath of the ice country, the user can actually feel the wind from the blower, and thus experience the cold visually, aurally, and bodily. Beyond the door lies an ice country as far as the eye can see, and the user can enjoy the mysterious sensation of going from a small room to a vast world all at once. In FIG. 8, the user can look around and see the sea, icebergs, the sun in a clear sky, and diamond dust, taking in the scenery of the ice country. Also, as shown in FIGS. 10A and 10B, on one side beyond the door the sea spreads out below a sheer cliff, and the height makes the user's heart race. When the user approaches the cliff, or when a predetermined time elapses, the vibration device shakes the floor under the user's feet and the cliff in front of the user collapses, giving the user a frightening experience. Users can go back and forth between the two virtual spaces (virtual worlds) repeatedly; one user can go beyond the door while the other walks around behind it without passing through, confirming that the first user cannot be seen; or a user can stand sideways in the middle of the doorway and experience different views and sounds in the right and left halves of the field of view.
Further, after the ice country is displayed in the region of the door DR as shown in FIG. 6C, the door DR is closed as shown in FIG. 6D, and when the door DR is opened again as shown in FIG. 6E, the other side of the door DR changes to a space on top of a train. That is, as shown in FIG. 11A, an image of being on top of the train TR, which is an image of the third virtual space VS3, is displayed in the region of the door DR. At this time, as shown in FIG. 11A, images of the room, such as the bookshelf BS and the windows WD1 and WD2, are displayed in the region other than the door DR. When the user US1 passes through the door DR and moves to its other side, the user character UC1 rides on top of the train TR, and a thrilling image in which the tunnel TN rushes toward the user is displayed before his or her eyes.
Note that, when the door is opened only slightly and then closed immediately, the virtual space switching process may be skipped. For example, some users may be startled when an image such as that shown in FIG. 11A is displayed and close the door right away. In such a case, it would be undesirable if the train scene image shown in FIG. 11A were not displayed the next time the door is opened.
When the user character UC1, having moved onto the train TR, looks back toward the door DR, an image such as that shown in FIG. 12 is displayed. As shown in FIG. 12, an image of the room, which is an image of the first virtual space VS1, is displayed in the region of the door DR. At this time, as shown in FIG. 12, an image from on top of the train (images of the tunnel, the train, and so on), which is an image of the third virtual space VS3, is displayed around the door DR. That is, while the image of the room, which is an image of the first virtual space VS1, is displayed in the region of the door DR, the image from on top of the train, which is an image of the third virtual space VS3, is displayed in the region other than the door DR. In addition, for example, an effect image is displayed in which the upper edge of the door DR hits the roof of the tunnel TN and scatters sparks. When the user character UC1 moves toward the door DR and passes through it again, the user character UC1 can return to the room that is the first virtual space VS1. Note that, in FIGS. 6A to 6E, each time the door DR is opened and closed, the virtual space on the other side of the door DR switches, for example, from the room to the ice country to the top of the train.
As described above, according to the simulation system of the present embodiment, the user can enjoy the thrilling experience of being suddenly thrown into a fast and dangerous place. The train repeatedly enters and exits the tunnel, and the upper edge of the door hits the tunnel roof and scatters sparks, further heightening the thrill. The vibration of the floor gives the user the sensation of really being on top of a train, and the blower provides a sense of speed.
That is, at the moment the user opens the door, the user is confused, wondering "Where is this?". Furthermore, because the scene beyond the door is moving while the user is standing still, the user has the strange experience of conflicting senses. When the user steps through the door, a fierce wind blows from the blower and the vibration of the train is transmitted to the user's feet by the vibration device, giving a sensation that can only be described as really being on top of a train. The user can also experience the thrill of a tunnel rushing toward him or her at tremendous speed. While passing through the tunnel, the tunnel ceiling closes in overhead and the upper edge of the door touches the ceiling, throwing sparks, which doubles the sense of fear. Even after exiting the tunnel, another tunnel approaches, so the user can experience the thrill again and again.
3.2 Virtual Space Switching Processing
Next, the method of the present embodiment will be described in detail. In the present embodiment, a first virtual space and a second virtual space linked to the first virtual space via a singular point are set as the virtual space. As described with reference to FIGS. 4 to 10B, the first virtual space is, for example, a virtual space corresponding to the room, and the second virtual space is a virtual space corresponding to the ice country. These first and second virtual spaces are linked via the singular point corresponding to the door DR. For example, it is possible to move back and forth between the first and second virtual spaces through the door DR. As described above, a singular point is a point at which a law serving as a reference can no longer be applied. For example, when a user character (or a virtual camera; the same applies hereinafter) is moving in the first virtual space (room), the law that the user character moves only within the first virtual space applies to that user character. The singular point of the present embodiment is a point at which such a law (reference) is not applied and is no longer followed. For example, in the present embodiment, when the switching condition is satisfied by passing through the singular point, the law of moving within the first virtual space is no longer applied to the user character, and the user character comes to move in the second virtual space (ice country), which differs from the first virtual space.
Specifically, in the present embodiment, before the switching condition for the virtual space is satisfied, the position information of the user character (or the virtual camera) is set as position information of the first virtual space. For example, in FIG. 13A, the information of the position P1 of the user character UC1 is set as position information of the first virtual space VS1 (room). Then, as a first drawing process, a process of drawing at least an image of the first virtual space VS1 (room) is performed. That is, an image seen from the virtual camera (the user's viewpoint) is generated in the first virtual space VS1.
In the present embodiment, as described with reference to FIG. 7, in this first drawing process, a process of drawing an image of the second virtual space VS2 (ice country) in addition to the image of the first virtual space VS1 (room) is performed to generate an image in which the image of the second virtual space VS2 is displayed in the region of the door DR, which is the region corresponding to the singular point. That is, as shown in FIG. 7, an image is generated in which the image of the ice country is displayed in the region of the door DR and the image of the room is displayed in the region other than the door DR.
On the other hand, as shown in FIG. 13A, suppose that the switching condition is satisfied by the user character UC1 (virtual camera) passing through the place of the door DR, which is the place corresponding to the singular point (SG), in a first traveling direction D1. That is, in FIG. 13A, the switching condition is satisfied when the user character UC1 moves from the position P1 through the door DR to the position P2. In this case, the position information of the user character UC1 (virtual camera) is set as position information of the second virtual space VS2 (ice country). For example, in FIG. 13A, the information of the positions P2 and P3 of the user character UC1 is set as position information of the second virtual space VS2 (ice country). Then, as a second drawing process, a process of drawing at least an image of the second virtual space VS2 is performed. That is, an image seen from the virtual camera (the user's viewpoint) is generated in the second virtual space VS2.
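The switching condition described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the door lies in a plane z = 0 and reduces crossing detection to a sign change along that axis; the function name and parameters are hypothetical.

```python
# Hypothetical sketch of the switching condition: a position is tagged
# with a virtual-space id, and crossing the door plane while the door
# is open re-tags it to the other space. All names are illustrative.

DOOR_Z = 0.0  # assumed: the door DR lies in the plane z = 0

def update_space(space, prev_z, new_z, door_open):
    """Return the virtual-space id after a move from prev_z to new_z.

    Crossing the door plane while the door is open toggles between
    space 1 (room, VS1) and space 2 (ice country, VS2); otherwise the
    position stays bound to the current space (the FIG. 13(B) case).
    """
    crossed = (prev_z - DOOR_Z) * (new_z - DOOR_Z) < 0
    if door_open and crossed:
        return 2 if space == 1 else 1
    return space

# P1 -> P2 through the door: switch to the second virtual space.
s = update_space(1, prev_z=-1.0, new_z=1.0, door_open=True)
print(s)  # 2
# Passing back through the door returns to the first virtual space.
print(update_space(s, prev_z=1.0, new_z=-1.0, door_open=True))  # 1
# Moving without crossing the door plane: no switch (FIG. 13(B)).
print(update_space(1, prev_z=-1.0, new_z=-0.5, door_open=True))  # 1
```

The same toggle also covers the return pass through the door in the opposite traveling direction, since a crossing in either direction flips the active space.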
In the present embodiment, as described with reference to FIG. 9, in this second drawing process, a process of drawing an image of the first virtual space VS1 (room) in addition to the image of the second virtual space VS2 (ice country) is performed to generate an image in which the image of the first virtual space VS1 is displayed in the region of the door DR, which is the region corresponding to the singular point. That is, as shown in FIG. 9, an image is generated in which the image of the room is displayed in the region of the door DR and the image of the ice country is displayed in the region other than the door DR.
More specifically, as shown in FIG. 13A, suppose that the user character UC1 (virtual camera) passes through the place of the door DR (the place corresponding to the singular point SG) in the first traveling direction D1, and the line-of-sight direction SL of the virtual camera then faces the direction opposite to the first traveling direction D1. That is, in FIG. 13A, after passing through the door DR in the first traveling direction D1, the user character UC1 looks back toward the door DR, and the line-of-sight direction SL of the virtual camera faces the door DR. This line-of-sight direction SL is, for example, the direction opposite to the first traveling direction D1. In such a case, in the present embodiment, an image is generated in which the image of the first virtual space VS1 (room) is displayed in the region of the door DR (the region corresponding to the singular point SG), as shown in FIG. 9.
On the other hand, as shown in FIG. 13A, suppose that the user character UC1 moves from the position P2 to the position P3 in the second virtual space VS2 and directs the line-of-sight direction SL of the virtual camera toward the door DR. That is, the user character moves from the position P2 to the position P3 on the back side of the door DR and directs the line-of-sight direction SL toward the door DR. In this case, unlike FIG. 9, the image of the room is not displayed in the region of the door DR. Specifically, when the line-of-sight direction SL of the virtual camera faces the door DR at the position P2 in FIG. 13A, the image of the room on the other side is displayed in the opening region of the door DR, as in FIG. 9. In contrast, when the line-of-sight direction SL of the virtual camera faces the door DR at the position P3 in FIG. 13A, the image of the ice country is displayed in the opening region of the door DR, and the image of the room is not displayed. That is, an image is displayed in which only the frame of the door DR stands on the glacier. In other words, an image is displayed in which an open door DR simply stands on the glacier.
Such an image is displayed because, in the present embodiment, the first virtual space and the second virtual space are linked discontinuously via the singular point SG. Therefore, as shown in FIG. 13A, when the position P1 of the room, which is the movement-source position, is viewed from the position P2, the image of the room can be seen through the opening region of the door DR; at the position P3 on the back side of the door DR, however, an image is displayed in which only the frame of the door DR (the open door) stands on the glacier. In this way, a virtual reality experience becomes possible in which the user seems to have warped from the first virtual space into the second virtual space, a space of a different dimension, making it possible to realize an unprecedented type of virtual reality.
The images of FIGS. 7 and 9 can be generated, for example, by the following method. In FIGS. 7 and 9, a flag FG is set for each pixel to distinguish, as seen from the virtual camera, pixels in the region other than the door DR (the region other than the region corresponding to the singular point) from pixels in the region of the door DR (the region corresponding to the singular point). For example, in the region other than the door DR, the flag is set to FG = 1 (a first flag state), and in the region of the door DR, the flag is set to FG = 0 (a second flag state). When the image of the first virtual space (room) seen from the virtual camera is taken as a first image and the image of the second virtual space (ice country) seen from the virtual camera is taken as a second image, then in FIG. 7 the color of the first image (room) is drawn for pixels set to FG = 1 (other than the door), and the color of the second image (ice country) is drawn for pixels set to FG = 0 (door). In this way, an image such as that shown in FIG. 7 can be generated. Conversely, in FIG. 9, the color of the second image (ice country) is drawn for pixels set to FG = 1 (other than the door), and the color of the first image (room) is drawn for pixels set to FG = 0 (door). In this way, an image such as that shown in FIG. 9 can be generated.
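The per-pixel flag selection described above can be sketched as follows. This is an illustrative reduction, not the patent's renderer: tiny one-dimensional lists stand in for full framebuffers, and the function name `composite` is hypothetical. (In a GPU implementation, the same effect is commonly achieved with a stencil buffer.)

```python
# Minimal sketch of the per-pixel flag compositing: both spaces are
# rendered, door-region pixels carry FG = 0 and all other pixels
# FG = 1, and the final color is chosen per pixel from the two images.

def composite(flags, img_current_space, img_other_space):
    # FG = 1: keep the pixel of the space the user character is in.
    # FG = 0 (door region): show the other space through the opening.
    return [cur if fg == 1 else other
            for fg, cur, other in zip(flags, img_current_space, img_other_space)]

flags = [1, 1, 0, 0, 1]            # pixels 2-3 fall inside the door opening
room  = ["R", "R", "R", "R", "R"]  # first image (room, VS1)
ice   = ["I", "I", "I", "I", "I"]  # second image (ice country, VS2)

# User character in the room (FIG. 7): ice visible only through the door.
print(composite(flags, room, ice))  # ['R', 'R', 'I', 'I', 'R']
# User character in the ice country (FIG. 9): room visible through the door.
print(composite(flags, ice, room))  # ['I', 'I', 'R', 'R', 'I']
```

Swapping the two image arguments reproduces the reversal between FIG. 7 and FIG. 9 without changing the flags.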
FIG. 13B shows an example in which the user character UC1 has not passed through the place of the door DR (singular point SG) and the switching condition is not satisfied. In this case, even if the user character UC1 moves from the position P1 to the position P2 without passing through the door DR, the switching condition is not satisfied, so the virtual space is not switched, and the information of the position P2 is set as position information of the first virtual space VS1 (room). Therefore, even if the line-of-sight direction SL of the virtual camera is directed toward the door DR at the position P2, the image of the room is displayed in the opening of the door DR. That is, an image is displayed in which only the frame of the door DR stands on the floor of the room. Also, in FIG. 13B, even if the user character UC1 moves from the position P2 to the position P3 and directs the line-of-sight direction SL of the virtual camera toward the door DR, an image is displayed in which only the frame of the door DR stands on the floor of the room.
As described above, in the present embodiment, when the switching condition is satisfied by passing through the singular point SG as shown in FIG. 13A, the positions P2 and P3 after passage are set as positions in the second virtual space VS2. In contrast, in FIG. 13B, the singular point SG has not been passed and the switching condition is not satisfied. For this reason, the same positions P2 and P3 as in FIG. 13A remain positions in the first virtual space VS1 in FIG. 13B.
また図14(A)では、ユーザキャラクタUC1(仮想カメラ)が、第1の進行方向D1でドアDR(特異点SG)を通過した後、第1の進行方向D1とは異なる第2の進行方向D2でドアDRを通過している。例えば、第1の仮想空間VS1の位置P1から第1の進行方向D1でドアDRを通過して、第2の仮想空間VS2の位置P2に移動した後、第1の進行方向D1(正方向)とは反対方向(逆極性の方向)の第2の進行方向D2(負方向)で、ドアDRを通過している。即ち位置P2から、ドアDRを通過して、位置P4に移動している。この場合にはユーザキャラクタUC1(仮想カメラ)の位置P4の情報を、第1の仮想空間VS1(部屋)の位置情報として設定する。そして第1の描画処理として、少なくとも第1の仮想空間VS1の画像を描画する処理を行う。即ち、第1の仮想空間VS1である部屋の画像を描画する処理を行う。或いは図7に示すように、第1の仮想空間VS1(部屋)の画像に加えて、第2の仮想空間VS2(氷の国)の画像を描画する処理を行って、ドアDR(特異点SG)の領域に、第2の仮想空間VS2の画像が表示される画像を生成する。例えば図14(A)に示すように、位置P4のユーザキャラクタUC1の仮想カメラの視線方向SLは、第2の進行方向D2の反対方向側を向いており、ドアDRの方を向いている。このような場合に、ドアDRの領域に、第2の仮想空間VS2(氷の国)の画像が表示される画像を生成する。このようにすることで、例えば第1の仮想空間VS1と第2の仮想空間VS2の間を特異点を介して行き来できるような仮想現実を実現できる。
In FIG. 14A, after passing through the door DR (singular point SG) in a first traveling direction D1, the user character UC1 (virtual camera) passes through the door DR in a second traveling direction D2 that differs from the first traveling direction D1. For example, after passing through the door DR in the first traveling direction D1 from the position P1 in the first virtual space VS1 and moving to the position P2 in the second virtual space VS2, the character passes through the door DR in the second traveling direction D2 (negative direction), which is opposite (of reverse polarity) to the first traveling direction D1 (positive direction). That is, the character moves from the position P2 through the door DR to the position P4. In this case, the information on the position P4 of the user character UC1 (virtual camera) is set as position information of the first virtual space VS1 (the room). Then, as the first drawing process, a process of drawing at least an image of the first virtual space VS1 is performed; that is, an image of the room, which is the first virtual space VS1, is drawn. Alternatively, as shown in FIG. 7, in addition to the image of the first virtual space VS1 (the room), an image of the second virtual space VS2 (the land of ice) is drawn, generating an image in which the image of the second virtual space VS2 is displayed in the region of the door DR (singular point SG). For example, as shown in FIG. 14A, the line-of-sight direction SL of the virtual camera of the user character UC1 at the position P4 faces the direction opposite to the second traveling direction D2, that is, toward the door DR. In such a case, an image is generated in which the image of the second virtual space VS2 (the land of ice) is displayed in the region of the door DR. In this way, a virtual reality can be realized in which, for example, the user can travel back and forth between the first virtual space VS1 and the second virtual space VS2 via the singular point.
なお図14(B)では、第2の仮想空間VS2の位置P2から、ドアDRを通過することなく、位置P4に移動している。この場合には、仮想空間の切替条件が成立していないため、位置P4は、第2の仮想空間VS2の位置のままとなる。即ち、図14(A)では位置P4は第1の仮想空間VS1の位置に切り替わっているが、図14(B)では、同じ位置P4が、第2の仮想空間VS2の位置のままとなる。
In FIG. 14B, the character moves from the position P2 in the second virtual space VS2 to the position P4 without passing through the door DR. In this case, since the virtual space switching condition is not satisfied, the position P4 remains a position in the second virtual space VS2. That is, in FIG. 14A the position P4 is switched to a position in the first virtual space VS1, whereas in FIG. 14B the same position P4 remains a position in the second virtual space VS2.
また本実施形態では、図11(A)~図12で説明したように、ユーザキャラクタ(仮想カメラ)が特異点に対応する場所を通過することで切替条件が成立した場合に、ユーザキャラクタ(仮想カメラ)の位置情報を、第3の仮想空間VS3(電車の上)の位置情報として設定し、第3の描画処理として、少なくとも第3の仮想空間VS3の画像を描画する処理を行う。
Further, in this embodiment, as described with reference to FIGS. 11A to 12, when the switching condition is satisfied by the user character (virtual camera) passing through the place corresponding to the singular point, the position information of the user character (virtual camera) is set as position information of the third virtual space VS3 (on the train), and, as the third drawing process, a process of drawing at least an image of the third virtual space VS3 is performed.
例えば図15(A)では、ユーザキャラクタUC1が、第1の仮想空間VS1の位置P1から、ドアDR(特異点SG)を第1の進行方向D1(正方向)で通過して、位置P2に移動している。これにより、ドアDRの通過による切替条件が成立したため、位置P2は第2の仮想空間VS2(氷の国)の位置として設定される。そしてユーザキャラクタUC1は、第2の仮想空間VS2の位置P2から、ドアDRを第2の進行方向D2(負方向)で通過して、位置P1に戻っている。これにより、ドアDRの通過による切替条件が成立したため、位置P1は第1の仮想空間VS1(部屋)の位置として設定される。
For example, in FIG. 15A, the user character UC1 moves from the position P1 in the first virtual space VS1 through the door DR (singular point SG) in the first traveling direction D1 (positive direction) to the position P2. Since the switching condition based on passing through the door DR is thereby satisfied, the position P2 is set as a position in the second virtual space VS2 (the land of ice). The user character UC1 then passes through the door DR in the second traveling direction D2 (negative direction) from the position P2 in the second virtual space VS2 and returns to the position P1. Since the switching condition based on passing through the door DR is again satisfied, the position P1 is set as a position in the first virtual space VS1 (the room).
このようにユーザキャラクタUC1が、第1の仮想空間VS1(部屋)の位置P1に戻った後、図15(B)では、ドアDRの開閉が行われている。これにより図6(C)~図6(E)で説明したように、ドアDRの向こう側の世界が、第2の仮想空間VS2(氷の国)から第3の仮想空間VS3(電車の上)に切り替わる。即ち図11(A)に示すような画像が表示されるようになる。そして図16に示すように、ユーザキャラクタUC1が、第1の仮想空間VS1の位置P1から、ドアDRを通過して、位置P2に移動している。これにより、ドアDRの通過による切替条件が成立したため、位置P2は第3の仮想空間VS3(電車の上)の位置として設定される。そして第3の描画処理として、少なくとも第3の仮想空間VS3の画像を描画する処理を行うことで、例えば図11(B)に示すような画像が生成される。またユーザキャラクタUC1がドアDRの方向を振り返った場合には、図12に示すような画像が生成される。即ち、第3の描画処理として、第3の仮想空間VS3(電車の上)の画像に加えて第1の仮想空間VS1(部屋)の画像を描画する処理を行って、ドアDRの領域に第1の仮想空間VS1の画像が表示される画像が生成される。このようにすれば、第1の仮想空間と第2の仮想空間の間の切替処理だけでなく、例えば第1の仮想空間と第3の仮想空間の間の切替処理も実現できるようになる。或いは第2の仮想空間と第3の仮想空間の間の切替処理も行うようにしてもよい。
After the user character UC1 has thus returned to the position P1 in the first virtual space VS1 (the room), the door DR is opened and closed in FIG. 15B. As a result, as described with reference to FIGS. 6C to 6E, the world beyond the door DR switches from the second virtual space VS2 (the land of ice) to the third virtual space VS3 (on the train); that is, an image as shown in FIG. 11A is displayed. Then, as shown in FIG. 16, the user character UC1 moves from the position P1 in the first virtual space VS1 through the door DR to the position P2. Since the switching condition based on passing through the door DR is thereby satisfied, the position P2 is set as a position in the third virtual space VS3 (on the train). As the third drawing process, a process of drawing at least an image of the third virtual space VS3 is performed, generating an image as shown in FIG. 11B, for example. When the user character UC1 turns around toward the door DR, an image as shown in FIG. 12 is generated. That is, as the third drawing process, an image of the first virtual space VS1 (the room) is drawn in addition to the image of the third virtual space VS3 (on the train), generating an image in which the image of the first virtual space VS1 is displayed in the region of the door DR. In this way, not only the switching process between the first virtual space and the second virtual space but also, for example, a switching process between the first virtual space and the third virtual space can be realized. Alternatively, a switching process between the second virtual space and the third virtual space may also be performed.
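The sequence of FIGS. 15A through 16, returning to VS1, opening and closing the door, and then finding the third virtual space on the other side, amounts to re-binding the destination of the door while it is closed. The following is an illustrative sketch under assumed names (`Portal`, `cycle_door`), not the embodiment's actual implementation.

```python
# Illustrative sketch: the door DR acts as a portal whose destination
# space can be re-bound by an open/close cycle; opening it again then
# reveals the new world on the other side (VS2 -> VS3 in FIGS. 15A-16).

class Portal:
    def __init__(self, home_space, destination):
        self.home_space = home_space   # the space containing the door (VS1)
        self.destination = destination # the space seen through the opening

    def cycle_door(self, new_destination):
        """Closing and re-opening the door switches the destination world."""
        self.destination = new_destination

    def pass_through(self, current_space):
        """A character crossing the portal ends up in the space on the other side."""
        if current_space == self.home_space:
            return self.destination
        return self.home_space
```

With `Portal("VS1", "VS2")`, passing through from VS1 yields VS2 and vice versa; after `cycle_door("VS3")`, the same door leads from VS1 to VS3 instead.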
また本実施形態では、複数のユーザがプレイする場合に、複数のユーザに対応する複数のユーザキャラクタ(複数の仮想カメラ)が特異点に対応する場所を通過して第1の仮想空間に戻ったことを条件に、第3の描画処理を許可する手法を採用する。
Further, in this embodiment, when a plurality of users play, a technique is adopted in which the third drawing process is permitted on the condition that the plurality of user characters (plurality of virtual cameras) corresponding to the plurality of users have passed through the place corresponding to the singular point and returned to the first virtual space.
例えば図17(A)では、ユーザキャラクタUC1、UC2に対応する複数のユーザがプレイしている。そしてユーザキャラクタUC1は、第1の仮想空間VS1からドアDRを通過して第2の仮想空間VS2に移動した後、再度、ドアDRを通過して、第1の仮想空間VS1に戻って来ている。これに対して、ユーザキャラクタUC2は、ドアDRを通過して第2の仮想空間VS2に移動した後、第2の仮想空間VS2に留まっている。
For example, in FIG. 17A, a plurality of users corresponding to the user characters UC1 and UC2 are playing. The user character UC1 passes through the door DR from the first virtual space VS1, moves to the second virtual space VS2, then passes through the door DR again and returns to the first virtual space VS1. The user character UC2, on the other hand, passes through the door DR, moves to the second virtual space VS2, and remains there.
このような状況において、例えば図17(B)に示すようにドアDRの開閉が行われたとする。このとき、図6(C)~図6(E)に示すように、第2の仮想空間VS2(氷の国)から第3の仮想空間VS3(電車の上)に切り替わってしまうと、図17(B)においてユーザキャラクタUC2が第2の仮想空間VS2に取り残されてしまう事態が生じてしまう。
In such a situation, suppose that the door DR is opened and closed as shown in FIG. 17B. If, at this point, the world beyond the door switches from the second virtual space VS2 (the land of ice) to the third virtual space VS3 (on the train) as shown in FIGS. 6C to 6E, a situation arises in FIG. 17B in which the user character UC2 is left behind in the second virtual space VS2.
そこで本実施形態では、このような事態の発生を防止するために、ユーザキャラクタUC1、UC2の両方が第1の仮想空間VS1に戻ったことを条件に、第3の仮想空間における第3の描画処理を許可する。
In this embodiment, therefore, to prevent such a situation from occurring, the third drawing process in the third virtual space is permitted on the condition that both of the user characters UC1 and UC2 have returned to the first virtual space VS1.
即ち、図18(A)では、ユーザキャラクタUC1のみならず、ユーザキャラクタUC2も、第2の仮想空間VS2の位置からドアDRを通過して第1の仮想空間VS1に戻っている。そして、このようにユーザキャラクタUC1、UC2が第1の仮想空間VS1に戻った後に、ドアDRの開閉が行われている。このときに、図18(B)に示すように、ユーザキャラクタUC1、UC2がドアDRを通過して切替条件が満たされると、ユーザキャラクタUC1、UC2の位置は第3の仮想空間VS3(電車の上)の位置として設定される。そして、第3の仮想空間VS3での第3の描画処理が行われて、図11(B)、図12に示すような画像が生成されるようになる。このようにすれば、複数のユーザに対応する複数のユーザキャラクタの一部が、第2の仮想空間に取り残されてしまい、ゲーム進行等に破綻を来すなどの事態を防止できるようになり、適切でスムーズなゲーム進行を実現できるようになる。
That is, in FIG. 18A, not only the user character UC1 but also the user character UC2 passes through the door DR from its position in the second virtual space VS2 and returns to the first virtual space VS1. After the user characters UC1 and UC2 have thus returned to the first virtual space VS1, the door DR is opened and closed. At this point, as shown in FIG. 18B, when the user characters UC1 and UC2 pass through the door DR and the switching condition is satisfied, the positions of the user characters UC1 and UC2 are set as positions in the third virtual space VS3 (on the train). The third drawing process in the third virtual space VS3 is then performed, and images as shown in FIGS. 11B and 12 are generated. This prevents a situation in which some of the user characters corresponding to the plurality of users are left behind in the second virtual space and the progress of the game breaks down, enabling appropriate and smooth game progress.
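The gating condition described above, permitting the switch to the third virtual space only once every user character is back in VS1, reduces to a simple predicate over the characters' current spaces. A minimal sketch (the function name and space labels are assumptions):

```python
# Illustrative sketch: switching the door's destination to the third
# virtual space VS3 is permitted only when every user character has
# returned to the first virtual space VS1 (FIGS. 17A-18B).

def third_space_switch_allowed(character_spaces):
    """character_spaces: mapping of character id -> current virtual space."""
    return all(space == "VS1" for space in character_spaces.values())
```

While any character (such as UC2 in FIG. 17A) remains in VS2, the predicate is false and the open/close cycle of the door is not allowed to re-bind its destination.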
また本実施形態ではユーザの入力情報又はセンサの検出情報に基づいて、切替条件が成立したか否かを判断してもよい。
In this embodiment, whether the switching condition is satisfied may also be determined based on the user's input information or on sensor detection information.
例えば図19(A)では、マイク等を使って、「氷の国に行きたい」という音声情報が、ユーザUS1の入力情報として入力されている。このようなユーザUS1の入力情報に基づいて、切替条件が成立したか否かを判断して、例えば仮想空間の切替処理を行ってもよい。このようにすることで、入力情報を入力するだけという簡素な手間で、仮想空間の切替処理を実現できるようになる。
For example, in FIG. 19A, the voice information "I want to go to the land of ice" is input as input information of the user US1 using a microphone or the like. Based on such input information of the user US1, it may be determined whether the switching condition is satisfied, and, for example, the virtual space switching process may be performed. In this way, the virtual space switching process can be realized with the simple action of entering input information.
なお、切替条件の判断に使用されるユーザの入力情報は、例えばゲームコントローラ(広義には操作部)により入力される操作情報であってもよい。或いは、リープモーションの処理によりユーザの手や指の動きを検出できる場合には、ユーザの手や指の動きによる入力情報を、ユーザの入力情報として用いて、切替条件を判断するようにしてもよい。或いは、ユーザの頭部の動きを、HMDのセンサ部等に基づき検出して、ユーザの入力情報として用いるようにしてもよい。
Note that the user input information used to determine the switching condition may be, for example, operation information input via a game controller (an operation section in a broad sense). Alternatively, when the movement of the user's hands or fingers can be detected by Leap Motion processing, input information based on the movement of the user's hands or fingers may be used as the user's input information to determine the switching condition. Alternatively, the movement of the user's head may be detected based on a sensor section of the HMD or the like and used as the user's input information.
また本実施形態では、センサの検出情報に基づいて、切替条件が成立したか否かを判断してもよい。例えば図19(B)では、ドアDRに対してセンサSEが設けられている。そしてこの場合に、センサSEの検出情報に基づいて、ドアDRの開閉状態を判断して、切替条件が成立したか否かを判断してもよい。センサSEとしては、例えば図2(A)で説明したような受光素子を用いてもよい。このようにすれば、図4のように部屋に設けられたベースステーション280、284からの光を、センサSEである受光素子で検出することで、ドアDRの開閉状態を判断できるようになる。従って、簡素な処理で切替条件を判断することが可能になる。或いは、センサSEとして、加速度センサやジャイロセンサなどから構成されるモーションセンサを用いてもよい。このようなモーションセンサを用いることで、ドアDRの開閉状態等を判断して、仮想空間の切替条件が成立したか否かを判断してもよい。このように、センサSEとしては種々の方式のセンサを採用することができる。
In this embodiment, whether the switching condition is satisfied may also be determined based on sensor detection information. For example, in FIG. 19B, a sensor SE is provided on the door DR. In this case, the open/closed state of the door DR may be determined based on the detection information of the sensor SE in order to determine whether the switching condition is satisfied. As the sensor SE, for example, a light receiving element as described with reference to FIG. 2A may be used. In this way, the open/closed state of the door DR can be determined by detecting the light from the base stations 280 and 284 provided in the room, as shown in FIG. 4, with the light receiving element serving as the sensor SE. The switching condition can therefore be determined with simple processing. Alternatively, a motion sensor composed of an acceleration sensor, a gyro sensor, or the like may be used as the sensor SE. Using such a motion sensor, the open/closed state of the door DR and so on may be determined in order to judge whether the virtual space switching condition is satisfied. Various types of sensors can thus be adopted as the sensor SE.
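The motion-sensor variant described above can be sketched as follows: an assumed hinge angle (for example, integrated from a gyro sensor) is thresholded to judge the open/closed state, and one full open-then-close cycle is counted as the trigger that switches the world beyond the door. The class name and threshold value are assumptions for illustration, not values from the embodiment.

```python
# Illustrative sketch: judging the door DR's open/closed state from an
# assumed hinge-angle reading, and counting full open -> close cycles
# as the "door was opened and closed" switching trigger.

class DoorSensor:
    OPEN_THRESHOLD_DEG = 10.0   # assumed threshold

    def __init__(self):
        self.was_open = False
        self.open_close_cycles = 0

    def update(self, hinge_angle_deg):
        """Feed one sensor reading; returns the current open state."""
        is_open = hinge_angle_deg > self.OPEN_THRESHOLD_DEG
        if self.was_open and not is_open:
            self.open_close_cycles += 1   # one full open -> close cycle completed
        self.was_open = is_open
        return is_open
```

Each completed cycle could then drive, for example, the destination re-binding of FIGS. 15A to 16.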
また本実施形態では図4、図20(A)、図20(B)に示すように、ユーザ(US1、US2)が移動する実空間のプレイフィールドFLには、特異点に対応する物体が配置されている。例えば特異点に対応する物体としてドアDRが配置されている。そして本実施形態では、ユーザが実空間において、物体であるドアDRの場所を通過した場合に、ユーザキャラクタ(仮想カメラ)が、特異点に対応する場所を通過したと判断する。
In this embodiment, as shown in FIGS. 4, 20A, and 20B, an object corresponding to the singular point is arranged in the play field FL in the real space in which the users (US1, US2) move. For example, the door DR is arranged as the object corresponding to the singular point. In this embodiment, when a user passes through the location of the door DR, which is a physical object, in the real space, it is determined that the user character (virtual camera) has passed through the place corresponding to the singular point.
このようにすれば、実空間の物体であるドアの場所をユーザが実際に通過することで、対応するユーザキャラクタがドアの場所を通過したと判断されるようになるため、ユーザの仮想現実感を向上できる。この場合に、実空間のドアを模したドアのオブジェクトを仮想空間に配置することで、実空間のドアと仮想空間のドアのオブジェクトとが適正に対応づけられるようになり、ユーザの仮想現実感を更に向上できる。また、例えば図19(B)のように実空間のドアにセンサを設け、このセンサの検出情報に基づいて、仮想空間のドアのオブジェクトの開閉状態を制御すれば、実空間のドアの開閉に連動して仮想空間のドアのオブジェクトも開閉されるようになり、ユーザの仮想現実感を格段に向上できるようになる。
In this way, when the user actually passes through the location of the door, which is an object in the real space, it is determined that the corresponding user character has passed through the location of the door, so the user's sense of virtual reality can be improved. In this case, by arranging a door object simulating the real-space door in the virtual space, the real-space door and the virtual-space door object are properly associated with each other, further improving the user's sense of virtual reality. Moreover, if a sensor is provided on the real-space door as shown, for example, in FIG. 19B, and the open/closed state of the virtual-space door object is controlled based on the detection information of this sensor, the virtual-space door object opens and closes in conjunction with the opening and closing of the real-space door, which can improve the user's sense of virtual reality dramatically.
なお図20(A)では、B1に示すように、ユーザキャラクタUC1の頭部や上半身部分だけが、ドアDRを通過しており、第2の仮想空間VS2(氷の国)に移動している。一方、ユーザキャラクタUC1の下半身部分は、第1の仮想空間VS1(部屋)に残っている。この場合に第1の仮想空間VS1に位置するユーザキャラクタUC2(ユーザUS2)からは、ユーザキャラクタUC1の頭部や上半身部分が消えて見えるようになり、ドアDRの向こう側が別世界であるという不思議な仮想現実を体験できる。またユーザキャラクタUC2が、ドアDRを通過せずに、B2に示すようにドアDRの向こう側に移動して見た場合には、更に違った見え方になる。また図20(B)のようにユーザキャラクタUC1(ユーザUS1)は、ドアDRをくぐって、後ろ側を振り返っても、第2の仮想空間である氷の国しか見えず、第1の仮想空間である部屋は見えない。従って、2つの仮想空間が、特異点であるドアDRを介してつながっているという不思議な感覚を与えることができ、これまでにない不思議な仮想現実を、体験させることが可能になる。
In FIG. 20A, as indicated by B1, only the head and upper body of the user character UC1 have passed through the door DR and moved into the second virtual space VS2 (the land of ice), while the lower body of the user character UC1 remains in the first virtual space VS1 (the room). In this case, to the user character UC2 (user US2) located in the first virtual space VS1, the head and upper body of the user character UC1 appear to have vanished, providing the strange virtual-reality experience that the other side of the door DR is a different world. If the user character UC2 moves to the far side of the door DR without passing through it, as indicated by B2, the scene looks different again. Also, as shown in FIG. 20B, even when the user character UC1 (user US1) passes through the door DR and looks back, only the land of ice, which is the second virtual space, is visible; the room, which is the first virtual space, cannot be seen. This can give the mysterious sensation that the two virtual spaces are connected via the door DR, which is the singular point, making it possible to experience an unprecedented, mysterious virtual reality.
なお図20(A)、図20(B)に示すような画像の生成を適切に行うためには、例えばユーザキャラクタの頭部の位置情報の検出に加えて、足などの下半身部分の位置情報を検出することが望ましい。また図20(A)、図20(B)に示すように、特異点に対応する領域は、少なくともユーザの視点に対応する仮想カメラが通過していればよく、ユーザキャラクタの全体が通過している必要はない。
To generate the images shown in FIGS. 20A and 20B appropriately, it is desirable to detect, for example, the position information of lower body parts such as the feet in addition to the position information of the user character's head. Also, as shown in FIGS. 20A and 20B, it suffices that at least the virtual camera corresponding to the user's viewpoint passes through the region corresponding to the singular point; the entire user character need not pass through it.
3.3 種々の処理例
3.3 Various Processing Examples
次に本実施形態の種々の処理例について説明する。本実施形態では、特異点を設定又は非設定にする処理を行うようにしてもよい。
Next, various processing examples of this embodiment will be described. In this embodiment, a process for setting or unsetting the singular point may be performed.
例えば図21(A)では、ユーザキャラクタUC1は第2の仮想空間VS2(氷の国)に位置しており、図8とは異なり、特異点SGに対応するドアDRは表示されていない。そして図21(A)では、ユーザキャラクタUC1に対応するユーザUS1が、例えば「現れよドア」という音声を入力している。これにより特異点SGを設定する処理が行われ、図21(B)に示すように、特異点SGに対応するドアDRが、第2の仮想空間VS2に現れて、表示されるようになる。或いは、図21(B)において、ユーザUS1が、例えば「消えろドア」という音声を入力することで、特異点SGを非設定にする処理を行って、例えば図21(A)に示すように、ドアDRを非表示にしてもよい。
For example, in FIG. 21A, the user character UC1 is located in the second virtual space VS2 (the land of ice), and, unlike in FIG. 8, the door DR corresponding to the singular point SG is not displayed. In FIG. 21A, the user US1 corresponding to the user character UC1 inputs, for example, the voice command "Appear, door". A process of setting the singular point SG is thereby performed, and, as shown in FIG. 21B, the door DR corresponding to the singular point SG appears in the second virtual space VS2 and is displayed. Alternatively, in FIG. 21B, the user US1 may input, for example, the voice command "Disappear, door" to perform a process of unsetting the singular point SG, hiding the door DR as shown, for example, in FIG. 21A.
このように、ユーザの入力情報などに基づいて、特異点の設定、非設定を任意に切り替えられるようにすれば、例えばゲーム状況等に応じて、特異点に対応するオブジェクトを表示したり、非表示にすることなどが可能になる。これにより、例えば特異点を隠したりすることなどが可能になり、より多様なゲーム処理やゲーム演出を実現できるようになる。なお、特異点の設定、非設定は、例えばユーザの入力情報以外にも、例えばゲーム処理の結果やユーザのゲームレベルなどに応じて、切り替えるようにしてもよい。
In this way, if the setting and unsetting of the singular point can be switched arbitrarily based on the user's input information or the like, the object corresponding to the singular point can be displayed or hidden according to, for example, the game situation. This makes it possible, for example, to hide the singular point, enabling more varied game processing and game presentation. The setting and unsetting of the singular point may also be switched according to, for example, the result of the game processing or the user's game level, in addition to the user's input information.
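The set/unset switching from voice input can be sketched as a small state update; the English command strings are assumed renderings of the Japanese examples in the text, and the function name is hypothetical.

```python
# Illustrative sketch: switching the singular point between set and
# unset from recognized voice commands (FIGS. 21A-21B).

def update_singular_point(command, is_set):
    """Return the new set/unset state of the singular point (door DR)."""
    if command == "appear, door":
        return True    # set: the door DR is displayed
    if command == "disappear, door":
        return False   # unset: the door DR is hidden
    return is_set      # unrelated input leaves the state unchanged
```

In a fuller system the same state could also be updated from the game-processing result or the user's game level, as the text notes.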
また本実施形態では、ユーザに仮想現実を体験させるための体感装置を制御する処理を行い、切替条件が成立していない場合の体感装置の制御と、切替条件が成立した場合の体感装置の制御を異ならせてもよい。
Further, in this embodiment, a process of controlling a body-sensation device for letting the user experience virtual reality may be performed, and the control of the body-sensation device when the switching condition is not satisfied may differ from the control when the switching condition is satisfied.
例えば図22(A)では、ユーザキャラクタUC1(US1)は、ドアDRを通過せずにドアDRの向こう側に移動しており、仮想空間の切替条件は成立していない。この場合には、送風機BL、振動デバイスVBなどの体感装置を動作させない。一方、図22(B)では、ユーザキャラクタUC1は、ドアDRを通過してドアDRの向こう側に移動しており、仮想空間の切替条件が成立している。この場合には、送風機BL、振動デバイスVBなどの体感装置を動作させる。例えば送風機BLによる冷風の送風を行ったり、振動デバイスVBにより床を振動させるなどの制御を行う。こうすることで、図10(A)、図10(B)で説明したように、仮想空間が、第1の仮想空間(部屋)から第2の仮想空間(氷の国)に切り替わったことを、送風機BL、振動デバイスVBなどの体感装置を用いて、ユーザに体感させることが可能になる。
For example, in FIG. 22A, the user character UC1 (US1) moves to the far side of the door DR without passing through it, so the virtual space switching condition is not satisfied. In this case, body-sensation devices such as the blower BL and the vibration device VB are not operated. In FIG. 22B, on the other hand, the user character UC1 passes through the door DR and moves to the far side of it, so the virtual space switching condition is satisfied. In this case, body-sensation devices such as the blower BL and the vibration device VB are operated; for example, control is performed such as blowing cold air with the blower BL or vibrating the floor with the vibration device VB. In this way, as described with reference to FIGS. 10A and 10B, the user can be made to feel, through body-sensation devices such as the blower BL and the vibration device VB, that the virtual space has switched from the first virtual space (the room) to the second virtual space (the land of ice).
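The device control described above keys off the space the character is judged to be in, not the character's raw position. A minimal sketch; the device names and setting strings are assumptions for illustration.

```python
# Illustrative sketch: driving body-sensation devices (blower BL,
# vibration device VB) according to the character's current virtual
# space (FIGS. 22A-22B).

def sensation_outputs(current_space):
    """Return device settings for the space the character is judged to be in."""
    if current_space == "VS2":   # the land of ice beyond the door
        return {"blower": "cold_strong", "vibration": "floor_rumble"}
    return {"blower": "off", "vibration": "off"}   # first virtual space (room)
```

Because the input is the judged space, walking around the door (FIG. 22A) leaves the devices off even at the same real-world spot where passing through the door (FIG. 22B) would turn them on.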
なお、切替条件が成立していない場合と切替条件が成立した場合とで、体感装置の制御態様や制御する体感装置の種類を異ならせてもよい。例えば送風機BLの送風のパワーを異ならせたり、振動デバイスVBの振動度合いを異ならせてもよい。或いは、第1の仮想空間用の第1の体感装置と、第2の仮想空間用の第2の体感装置を用意し、切替条件が成立していない場合には、第1の体感装置を動作させ、切替条件が成立した場合には、第2の体感装置を動作させるようにしてもよい。
The control mode of the body-sensation device and the type of body-sensation device to be controlled may differ between the case in which the switching condition is not satisfied and the case in which it is satisfied. For example, the blowing power of the blower BL or the degree of vibration of the vibration device VB may be varied. Alternatively, a first body-sensation device for the first virtual space and a second body-sensation device for the second virtual space may be prepared, with the first body-sensation device operated when the switching condition is not satisfied and the second body-sensation device operated when the switching condition is satisfied.
また本実施形態では、複数のユーザがプレイする場合に、実空間でのユーザ間の衝突の報知情報の出力処理を行う。
Further, in this embodiment, when a plurality of users play, a process of outputting notification information about collisions between the users in the real space is performed.
例えば図23(A)では、ユーザキャラクタUC1、UC2に対応する複数のユーザUS1、US2がゲームをプレイしている。そしてユーザキャラクタUC1はドアDRを通過して移動することで、第2の仮想空間VS2に移動しているが、ユーザキャラクタUC2は、ドアDRを通過しておらず、第1の仮想空間VS1に留まっている。そして図23(B)では、ユーザキャラクタUC1が第2の仮想空間VS2内を移動して、ユーザキャラクタUC2の方に近づいている。
For example, in FIG. 23A, a plurality of users US1 and US2 corresponding to the user characters UC1 and UC2 are playing the game. The user character UC1 moves through the door DR into the second virtual space VS2, while the user character UC2 does not pass through the door DR and remains in the first virtual space VS1. In FIG. 23B, the user character UC1 then moves within the second virtual space VS2 and approaches the user character UC2.
この場合にユーザキャラクタUC2は第1の仮想空間VS1に留まっているため、ユーザUS2のHMD2には、第1の仮想空間VS1(部屋)の画像が表示される。一方、ユーザキャラクタUC1は第2の仮想空間VS2に移動しているため、ユーザUS1のHMD1には、第2の仮想空間VS2(氷の国)の画像が表示される。従って、HMD1により視界が覆われているユーザUS1は、ユーザUS2に対応するユーザキャラクタUC2の存在を視覚的に認識できない。同様にHMD2により視界が覆われているユーザUS2は、ユーザUS1に対応するユーザキャラクタUC1の存在を視覚的に認識できない。従って、実世界においてユーザUS1、US2が衝突してしまう事態が生じるおそれがある。
In this case, since the user character UC2 remains in the first virtual space VS1, an image of the first virtual space VS1 (the room) is displayed on the HMD2 of the user US2. Since the user character UC1 has moved to the second virtual space VS2, an image of the second virtual space VS2 (the land of ice) is displayed on the HMD1 of the user US1. Accordingly, the user US1, whose field of view is covered by the HMD1, cannot visually recognize the presence of the user character UC2 corresponding to the user US2. Similarly, the user US2, whose field of view is covered by the HMD2, cannot visually recognize the presence of the user character UC1 corresponding to the user US1. A situation may therefore arise in which the users US1 and US2 collide in the real world.
そこで本実施形態では、このような事態の発生を防止するために、複数のユーザがプレイする場合に、実空間でのユーザ間の衝突の報知情報の出力処理を行う。例えば図24は、ユーザUS1のHMD1の表示画像の例である。この表示画像には、ユーザUS1の手や指等が表示されている。
In this embodiment, therefore, to prevent such a situation from occurring, a process of outputting notification information about collisions between the users in the real space is performed when a plurality of users play. For example, FIG. 24 shows an example of the display image on the HMD1 of the user US1. The hands, fingers, and so on of the user US1 are displayed in this image.
そして図23(B)のような状況の場合に、ユーザUS1のHMD1には、通常では、ユーザキャラクタUC2は表示されない。これに対して図24では、ユーザキャラクタUC2を、例えばゴーストキャラクタのように薄らと表示する。例えば所与の半透明度で背景等と半透明合成されたユーザキャラクタUC2を表示する。こうすることで、ユーザUS1は、ユーザキャラクタUC2に対応するユーザUS2が近くに居ることを視覚的に認識できるようになり、実世界でのユーザ間の衝突を回避できるようになる。この場合にユーザUS2のHMD2においても、ユーザUS1に対応するユーザキャラクタUC1を、例えば背景等と半透明合成することなどにより、ゴーストキャラクタのように薄らと表示する。こうすることで、ユーザUS2は、ユーザキャラクタUC1に対応するユーザUS1が近くに居ることを視覚的に認識できるようになり、実世界でのユーザ間の衝突を回避できるようになる。
In the situation of FIG. 23B, the user character UC2 would normally not be displayed on the HMD1 of the user US1. In FIG. 24, by contrast, the user character UC2 is displayed faintly, like a ghost character; for example, the user character UC2 is displayed semi-transparently blended with the background and so on at a given translucency. In this way, the user US1 can visually recognize that the user US2 corresponding to the user character UC2 is nearby, and a collision between the users in the real world can be avoided. In this case, on the HMD2 of the user US2 as well, the user character UC1 corresponding to the user US1 is displayed faintly, like a ghost character, for example by semi-transparent blending with the background. The user US2 can thereby visually recognize that the user US1 corresponding to the user character UC1 is nearby, and a collision between the users in the real world can be avoided.
なお、衝突の危険度に応じて、図24のユーザキャラクタUC2の表示態様を変化させてもよい。例えばユーザキャラクタUC1、UC2の距離が近づくにつれて、或いは、ユーザキャラクタUC1やUC2の接近速度が速くなるにつれて、ユーザキャラクタUC2の表示を徐々に濃くするなどして、視覚の認識度合いを高める処理を行ってもよい。
The display mode of the user character UC2 in FIG. 24 may be varied according to the degree of collision risk. For example, as the distance between the user characters UC1 and UC2 decreases, or as their approach speed increases, processing may be performed to raise the degree of visual recognition, such as gradually making the display of the user character UC2 denser.
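The risk-dependent display described above can be sketched as a simple opacity function of distance and approach speed. The blend weights and thresholds below are illustrative assumptions, not values from the embodiment.

```python
# Illustrative sketch: the opacity of the "ghost" rendering of another
# user's character grows with the real-space collision risk
# (smaller distance, faster approach).

def ghost_alpha(distance_m, approach_speed_mps,
                warn_distance_m=3.0, max_speed_mps=2.0):
    """Return an opacity in [0, 1]; 0 = invisible, 1 = fully opaque."""
    if distance_m >= warn_distance_m:
        return 0.0   # far enough away: the other character stays hidden
    proximity = 1.0 - distance_m / warn_distance_m
    speed = min(max(approach_speed_mps, 0.0), max_speed_mps) / max_speed_mps
    # Blend proximity and approach speed into a single risk value.
    risk = 0.75 * proximity + 0.25 * speed
    return min(1.0, risk)
```

The resulting alpha would drive the semi-transparent blending of the ghost character with the background on each user's HMD.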
また図24のC1に示すように衝突の警告を報知するような情報を、HMDの表示画像に重畳表示してもよい。或いは、音声や振動により衝突の警告を報知してもよい。このように報知情報の出力処理としては種々の態様の出力処理を想定できる。
Further, as indicated at C1 in FIG. 24, information giving a collision warning may be superimposed on the display image of the HMD. Alternatively, a collision warning may be issued by sound or vibration. Various forms of output processing can thus be assumed for outputting the notification information.
なお図25(A)では、ユーザキャラクタUC1、UC2が第2の仮想空間VS2に移動した後、ユーザキャラクタUC2がドアDRの裏側の位置に移動している。そして図25(B)に示すように、ユーザキャラクタUC2は、ドアDRの裏側の位置から、ユーザキャラクタUC1の方に移動して、ドアDRを通過している。
In FIG. 25A, after the user characters UC1 and UC2 have moved to the second virtual space VS2, the user character UC2 has moved to a position behind the door DR. As shown in FIG. 25B, the user character UC2 moves from the position on the back side of the door DR toward the user character UC1 and passes through the door DR.
この場合にユーザキャラクタUC1の仮想カメラの視線方向SLは、ドアDRの方向を向いている。このため図9に示すように、ドアDRの領域が部屋の画像となる画像が表示される。そしてこの状態で図25(B)に示すように、第2の仮想空間VS2のユーザキャラクタUC2がドアDRを通過すると、図9の部屋の画像の中から突然にユーザキャラクタUC2が飛び出して来るような画像が表示されるようになる。
In this case, the line-of-sight direction SL of the virtual camera of the user character UC1 faces the door DR. Accordingly, as shown in FIG. 9, an image is displayed in which the region of the door DR shows the image of the room. In this state, as shown in FIG. 25B, when the user character UC2 in the second virtual space VS2 passes through the door DR, an image is displayed in which the user character UC2 suddenly jumps out of the room image of FIG. 9.
また本実施形態では、実空間でのユーザの位置情報を取得し、取得された位置情報に基づいて、ユーザキャラクタを移動させる処理を行うことが望ましい。例えば図26(A)において、実空間でのユーザUS1の位置PRの情報を取得する。この実空間での位置PRの情報は、例えば図2(A)~図3(B)で説明したHMD1のトラッキング処理などにより取得できる。そして、取得された実空間での位置PRの情報に基づいて、図26(B)に示す仮想空間のユーザキャラクタUC1を移動させる処理を行う。即ち、図26(A)に示す実空間でのユーザUS1の位置PRと、図26(B)に示す仮想空間でのユーザキャラクタUC1の位置PVとを対応づけ、位置PRの移動に連動するように位置PVを変化させて、ユーザキャラクタUC1を移動させる。そしてユーザUS1が装着するHMD1の表示画像を、本実施形態の手法により生成する。
Further, in this embodiment, it is desirable to acquire the position information of the user in the real space and to perform a process of moving the user character based on the acquired position information. For example, in FIG. 26A, information on the position PR of the user US1 in the real space is acquired. The information on the position PR in the real space can be acquired by, for example, the tracking process of the HMD1 described with reference to FIGS. 2A to 3B. Based on the acquired information on the position PR in the real space, a process of moving the user character UC1 in the virtual space shown in FIG. 26B is performed. That is, the position PR of the user US1 in the real space shown in FIG. 26A is associated with the position PV of the user character UC1 in the virtual space shown in FIG. 26B, and the position PV is changed in conjunction with the movement of the position PR to move the user character UC1. The display image of the HMD1 worn by the user US1 is then generated by the technique of this embodiment.
具体的には切替条件が成立する前においては、実空間でのユーザUS1の位置PRの情報により特定されるユーザキャラクタUC1(仮想カメラ)の位置PVの情報を、第1の仮想空間の位置情報として設定する。一方、切替条件が成立した場合には、実空間でのユーザUS1の位置PRの情報により特定されるユーザキャラクタUC1(仮想カメラ)の位置PVの情報を、第2の仮想空間の位置情報として設定する。
Specifically, before the switching condition is satisfied, the information on the position PV of the user character UC1 (virtual camera), specified from the information on the position PR of the user US1 in the real space, is set as position information of the first virtual space. When the switching condition is satisfied, on the other hand, the information on the position PV of the user character UC1 (virtual camera), specified from the information on the position PR of the user US1 in the real space, is set as position information of the second virtual space.
このようにすれば、実空間においてユーザUS1がプレイフィールドFLを移動すると、その移動に対応するようにユーザキャラクタUC1も移動するようになる。そしてユーザUS1が、実空間でのドアDRの物体の場所を通過したか否かで、切替条件が成立したか否かが判断され、図13(A)、13(B)に示すような仮想空間の切替処理や描画処理が実行されるようになる。
In this way, when the user US1 moves within the play field FL in the real space, the user character UC1 moves correspondingly. Whether the switching condition is satisfied is then determined by whether the user US1 has passed through the location of the door DR object in the real space, and the virtual space switching process and the drawing processes shown in FIGS. 13A and 13B are executed.
In this case, the image displayed on the HMD 1 differs between the case where the user passes through the door DR and moves to the positions P2 and P3 as shown in FIG. 13A, and the case where the user moves to the positions P2 and P3 without passing through the door DR as shown in FIG. 13B. That is, even when the user is at the same positions P2 and P3 in the real space, an image of the second virtual space VS2 (the land of ice) is displayed on the HMD 1 in the case of FIG. 13A, whereas an image of the first virtual space VS1 (the room) is displayed on the HMD 1 in the case of FIG. 13B. Accordingly, it becomes possible to provide the user with a mysterious VR experience that could not be realized with conventional simulation systems.
4. Detailed Processing
Next, a detailed processing example of the present embodiment will be described with reference to the flowchart of FIG. 27. First, it is determined whether or not the switching condition is satisfied (step S1). When, for example, the user character has not passed through the place corresponding to the singular point and the switching condition is not satisfied, the position information of the user character or the virtual camera is set as position information of the first virtual space (step S2). For example, the position information is set as shown in FIG. 13B. Then, a process of drawing the image of the second virtual space in addition to the image of the first virtual space is performed to generate an image in which the image of the second virtual space is displayed in the region corresponding to the singular point (step S3). For example, as shown in FIG. 7, an image is generated in which an image of the land of ice, which is the second virtual space VS2, is displayed in the region of the door DR, which is the region corresponding to the singular point.
On the other hand, when the user character or the moving body passes through the place corresponding to the singular point and the switching condition is satisfied, the position information of the user character or the virtual camera is set as position information of the second virtual space (step S4). For example, the position information is set as shown in FIG. 13A. Then, a process of drawing the image of the first virtual space in addition to the image of the second virtual space is performed to generate an image in which the image of the first virtual space is displayed in the region corresponding to the singular point (step S5). For example, as shown in FIG. 9, an image is generated in which an image of the room, which is the first virtual space VS1, is displayed in the region of the door DR, which is the region corresponding to the singular point.
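The flow of steps S1 to S5 amounts to one branch per frame. The sketch below only reports which space supplies the position information, which image is the main drawing, and which image appears in the region corresponding to the singular point; the string labels are illustrative assumptions, not actual rendering code:

```python
def render_frame(switch_condition_met):
    """One pass of the flowchart of FIG. 27 (steps S1 to S5).
    Returns (space used for position information, main image,
    image displayed in the region corresponding to the singular point)."""
    if not switch_condition_met:
        # S2: set the position information in the first virtual space.
        # S3: draw VS2 in addition to VS1; the door region shows VS2.
        return ("VS1", "image of VS1", "image of VS2")
    # S4: set the position information in the second virtual space.
    # S5: draw VS1 in addition to VS2; the door region shows VS1.
    return ("VS2", "image of VS2", "image of VS1")
```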
Although the present embodiment has been described in detail above, those skilled in the art will readily understand that many modifications are possible without substantially departing from the novel matters and effects of the present invention. Accordingly, all such modifications are intended to be included within the scope of the present invention. For example, a term (user character, door, etc.) that is described at least once in the specification or drawings together with a different term having a broader or equivalent meaning (user moving body, singular point, etc.) can be replaced with that different term anywhere in the specification or drawings. The position information acquisition processing, virtual space setting processing, moving body movement processing, display processing, game processing, notification processing, sensation device control processing, virtual space switching processing, and the like are also not limited to those described in the present embodiment, and techniques, processes, and configurations equivalent to these are also included within the scope of the present invention. The present invention can be applied to various games, and to various simulation systems such as arcade game apparatuses, home game apparatuses, and large attraction systems in which a large number of users participate.
US1, US2, US user,
UC1, UC2 user character (user moving body),
VS1 first virtual space, VS2 second virtual space, DR door,
SG singular point, FL play field, BL blower,
VB vibration device, DK desk, BS bookshelf, WD1, WD2 windows,
TR train, TN tunnel, SL line-of-sight direction,
100 processing unit, 102 input processing unit, 110 arithmetic processing unit,
111 information acquisition unit, 112 virtual space setting unit, 113 moving body processing unit,
114 virtual camera control unit, 115 game processing unit, 116 notification processing unit,
117 sensation device control unit, 120 display processing unit, 130 sound processing unit,
140 output processing unit, 150 imaging unit, 151, 152 cameras,
160 operation unit, 170 storage unit, 172 object information storage unit,
178 drawing buffer, 180 information storage medium, 192 sound output unit,
194 I/F unit, 195 portable information storage medium, 196 communication unit,
200 HMD (head-mounted display device), 201 to 203 light receiving elements,
210 sensor unit, 220 display unit, 231 to 236 light emitting elements,
240 processing unit, 250 processing device, 260 headband,
270 headphones, 280, 284 base stations,
281, 282, 285, 286 light emitting elements
Claims (15)
- A simulation system comprising:
a virtual space setting unit that performs a setting process of a virtual space in which objects are arranged;
a moving body processing unit that performs a process of moving, in the virtual space, a user moving body corresponding to a user; and
a display processing unit that performs a drawing process of an image seen from a virtual camera in the virtual space,
wherein the virtual space setting unit sets, as the virtual space, a first virtual space and a second virtual space connected to the first virtual space via a singular point,
wherein, before a given switching condition is satisfied, the display processing unit sets position information of the user moving body or the virtual camera as position information of the first virtual space and, as a first drawing process, performs a process of drawing an image of the second virtual space in addition to an image of the first virtual space to generate an image in which the image of the second virtual space is displayed in a region corresponding to the singular point,
wherein, when the switching condition is satisfied by the user moving body or the virtual camera passing through a place corresponding to the singular point in a first traveling direction, the display processing unit sets the position information of the user moving body or the virtual camera as position information of the second virtual space and, as a second drawing process, performs a process of drawing the image of the first virtual space in addition to the image of the second virtual space to generate an image in which the image of the first virtual space is displayed in the region corresponding to the singular point, and
wherein the display processing unit generates an image in which the image of the first virtual space is displayed in the region corresponding to the singular point when a line-of-sight direction of the virtual camera faces a direction opposite to the first traveling direction.
- The simulation system according to claim 1,
wherein, when the user moving body or the virtual camera passes through the singular point in a second traveling direction different from the first traveling direction, the display processing unit sets the position information of the user moving body or the virtual camera as position information of the first virtual space and, as the first drawing process, performs a process of drawing at least the image of the first virtual space.
- The simulation system according to claim 1 or 2,
wherein, when the switching condition is satisfied by the user moving body or the virtual camera passing through the place corresponding to the singular point, the display processing unit sets the position information of the user moving body or the virtual camera as position information of a third virtual space and, as a third drawing process, performs a process of drawing at least an image of the third virtual space.
- The simulation system according to claim 3,
wherein, when a plurality of users play, the display processing unit permits the third drawing process on condition that a plurality of user moving bodies or a plurality of virtual cameras corresponding to the plurality of users have passed through the place corresponding to the singular point and returned to the first virtual space.
- The simulation system according to any one of claims 1 to 4,
wherein the display processing unit determines whether or not the switching condition is satisfied based on input information of the user or detection information of a sensor.
- The simulation system according to any one of claims 1 to 5,
wherein an object corresponding to the singular point is arranged in a play field of a real space in which the user moves, and
wherein the display processing unit determines that the user moving body or the virtual camera has passed through the place corresponding to the singular point when the user has passed through the location of the object in the real space.
- The simulation system according to any one of claims 1 to 6,
wherein the display processing unit performs a process of setting or canceling the setting of the singular point.
- The simulation system according to any one of claims 1 to 7, further comprising
a sensation device control unit that controls a sensation device for allowing the user to experience virtual reality,
wherein the sensation device control unit makes the control of the sensation device when the switching condition is not satisfied differ from the control of the sensation device when the switching condition is satisfied.
- The simulation system according to any one of claims 1 to 8, further comprising
a notification processing unit that, when a plurality of users play, performs output processing of notification information of a collision between users in a real space.
- The simulation system according to any one of claims 1 to 9, further comprising
an information acquisition unit that acquires position information of the user in a real space,
wherein the moving body processing unit performs a process of moving the user moving body based on the acquired position information,
wherein the display processing unit generates a display image for a head-mounted display device worn by the user, and
wherein the display processing unit sets, before the switching condition is satisfied, the position information of the user moving body or the virtual camera specified from the position information of the user in the real space as the position information of the first virtual space, and sets, when the switching condition is satisfied, the position information of the user moving body or the virtual camera specified from the position information of the user in the real space as the position information of the second virtual space.
- A simulation system comprising:
an information acquisition unit that acquires position information of a user in a real space;
a virtual space setting unit that performs a setting process of a virtual space in which objects are arranged;
a moving body processing unit that performs, based on the acquired position information, a process of moving, in the virtual space, a user moving body corresponding to the user; and
a display processing unit that performs a drawing process of an image seen from a virtual camera in the virtual space and generates an image to be displayed on a head-mounted display device worn by the user,
wherein the virtual space setting unit sets, as the virtual space, a first virtual space and a second virtual space connected to the first virtual space via a singular point,
wherein, before a given switching condition is satisfied, the display processing unit sets the position information of the user moving body or the virtual camera specified from the position information of the user in the real space as position information of the first virtual space and, as a first drawing process, performs a process of drawing at least an image of the first virtual space, and
wherein, when the switching condition is satisfied by the user moving body or the virtual camera passing through a place corresponding to the singular point in a first traveling direction, the display processing unit sets the position information of the user moving body or the virtual camera specified from the position information of the user in the real space as position information of the second virtual space and, as a second drawing process, performs a process of drawing at least an image of the second virtual space.
- An image processing method comprising:
a virtual space setting process of setting a virtual space in which objects are arranged;
a moving body process of moving, in the virtual space, a user moving body corresponding to a user; and
a display process of performing a drawing process of an image seen from a virtual camera in the virtual space,
wherein, in the virtual space setting process, a first virtual space and a second virtual space connected to the first virtual space via a singular point are set as the virtual space,
wherein, in the display process, before a given switching condition is satisfied, position information of the user moving body or the virtual camera is set as position information of the first virtual space and, as a first drawing process, a process of drawing an image of the second virtual space in addition to an image of the first virtual space is performed to generate an image in which the image of the second virtual space is displayed in a region corresponding to the singular point,
wherein, when the switching condition is satisfied by the user moving body or the virtual camera passing through a place corresponding to the singular point in a first traveling direction, the position information of the user moving body or the virtual camera is set as position information of the second virtual space and, as a second drawing process, a process of drawing the image of the first virtual space in addition to the image of the second virtual space is performed to generate an image in which the image of the first virtual space is displayed in the region corresponding to the singular point, and
wherein, in the display process, an image in which the image of the first virtual space is displayed in the region corresponding to the singular point is generated when a line-of-sight direction of the virtual camera faces a direction opposite to the first traveling direction.
- An image processing method comprising:
an information acquisition process of acquiring position information of a user in a real space;
a virtual space setting process of setting a virtual space in which objects are arranged;
a moving body process of moving, based on the acquired position information, a user moving body corresponding to the user in the virtual space; and
a display process of performing a drawing process of an image seen from a virtual camera in the virtual space and generating an image to be displayed on a head-mounted display device worn by the user,
wherein, in the virtual space setting process, a first virtual space and a second virtual space connected to the first virtual space via a singular point are set as the virtual space,
wherein, in the display process, before a given switching condition is satisfied, the position information of the user moving body or the virtual camera specified from the position information of the user in the real space is set as position information of the first virtual space and, as a first drawing process, a process of drawing at least an image of the first virtual space is performed, and
wherein, when the switching condition is satisfied by the user moving body or the virtual camera passing through a place corresponding to the singular point in a first traveling direction, the position information of the user moving body or the virtual camera specified from the position information of the user in the real space is set as position information of the second virtual space and, as a second drawing process, a process of drawing at least an image of the second virtual space is performed.
- A computer-readable information storage medium storing a program that causes a computer to function as:
a virtual space setting unit that performs a setting process of a virtual space in which objects are arranged;
a moving body processing unit that performs a process of moving, in the virtual space, a user moving body corresponding to a user; and
a display processing unit that performs a drawing process of an image seen from a virtual camera in the virtual space,
wherein the virtual space setting unit sets, as the virtual space, a first virtual space and a second virtual space connected to the first virtual space via a singular point,
wherein, before a given switching condition is satisfied, the display processing unit sets position information of the user moving body or the virtual camera as position information of the first virtual space and, as a first drawing process, performs a process of drawing an image of the second virtual space in addition to an image of the first virtual space to generate an image in which the image of the second virtual space is displayed in a region corresponding to the singular point,
wherein, when the switching condition is satisfied by the user moving body or the virtual camera passing through a place corresponding to the singular point in a first traveling direction, the display processing unit sets the position information of the user moving body or the virtual camera as position information of the second virtual space and, as a second drawing process, performs a process of drawing the image of the first virtual space in addition to the image of the second virtual space to generate an image in which the image of the first virtual space is displayed in the region corresponding to the singular point, and
wherein the display processing unit generates an image in which the image of the first virtual space is displayed in the region corresponding to the singular point when a line-of-sight direction of the virtual camera faces a direction opposite to the first traveling direction.
- A computer-readable information storage medium storing a program that causes a computer to function as:
an information acquisition unit that acquires position information of a user in a real space;
a virtual space setting unit that performs a setting process of a virtual space in which objects are arranged;
a moving body processing unit that performs, based on the acquired position information, a process of moving, in the virtual space, a user moving body corresponding to the user; and
a display processing unit that performs a drawing process of an image seen from a virtual camera in the virtual space and generates an image to be displayed on a head-mounted display device worn by the user,
wherein the virtual space setting unit sets, as the virtual space, a first virtual space and a second virtual space connected to the first virtual space via a singular point,
wherein, before a given switching condition is satisfied, the display processing unit sets the position information of the user moving body or the virtual camera specified from the position information of the user in the real space as position information of the first virtual space and, as a first drawing process, performs a process of drawing at least an image of the first virtual space, and
wherein, when the switching condition is satisfied by the user moving body or the virtual camera passing through a place corresponding to the singular point in a first traveling direction, the display processing unit sets the position information of the user moving body or the virtual camera specified from the position information of the user in the real space as position information of the second virtual space and, as a second drawing process, performs a process of drawing at least an image of the second virtual space.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016-256772 | 2016-12-28 | ||
JP2016256772A JP6761340B2 (en) | 2016-12-28 | 2016-12-28 | Simulation system and program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2018124280A1 true WO2018124280A1 (en) | 2018-07-05 |
Family
ID=62709645
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2017/047250 WO2018124280A1 (en) | 2016-12-28 | 2017-12-28 | Simulation system, image processing method, and information storage medium |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP6761340B2 (en) |
WO (1) | WO2018124280A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163977A (en) * | 2018-07-13 | 2019-08-23 | 腾讯数码(天津)有限公司 | Virtual channel rendering method and device in more world's virtual scenes |
EP3968143A1 (en) * | 2020-09-15 | 2022-03-16 | Nokia Technologies Oy | Audio processing |
CN118451473A (en) * | 2021-12-22 | 2024-08-06 | 株式会社Celsys | Image generation method and image generation program |
Families Citing this family (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2020026419A1 (en) * | 2018-08-02 | 2020-02-06 | 株式会社ソニー・インタラクティブエンタテインメント | Image generation device and image generation method |
US11373379B2 (en) * | 2018-08-23 | 2022-06-28 | Sony Interactive Entertainment Inc. | Image generation apparatus and image generation method for generating augmented reality images based on user interaction |
US11826648B2 (en) | 2019-01-30 | 2023-11-28 | Sony Group Corporation | Information processing apparatus, information processing method, and recording medium on which a program is written |
US10978019B2 (en) | 2019-04-15 | 2021-04-13 | XRSpace CO., LTD. | Head mounted display system switchable between a first-person perspective mode and a third-person perspective mode, related method and related non-transitory computer readable storage medium |
JP7351638B2 (en) * | 2019-04-23 | 2023-09-27 | 株式会社ソニー・インタラクティブエンタテインメント | Image generation device, image display system, and information presentation method |
US20220291744A1 (en) * | 2019-09-03 | 2022-09-15 | Sony Group Corporation | Display processing device, display processing method, and recording medium |
JP7561009B2 (en) | 2020-11-16 | 2024-10-03 | 任天堂株式会社 | Information processing system, information processing program, information processing device, and information processing method |
JP7467810B2 (en) * | 2021-05-07 | 2024-04-16 | Kyoto’S 3D Studio株式会社 | Mixed reality providing system and method |
JP7324469B2 (en) | 2021-06-28 | 2023-08-10 | グリー株式会社 | Information processing system, information processing method, information processing program |
JP6989199B1 (en) | 2021-10-06 | 2022-01-05 | クラスター株式会社 | Information processing equipment |
US12026846B2 (en) * | 2021-11-03 | 2024-07-02 | Granden Corp. | Location-based metaverse social system combining the real world with virtual worlds for virtual reality interaction |
JP7449508B2 (en) | 2022-07-14 | 2024-03-14 | グリー株式会社 | Information processing system, information processing method, and program |
WO2024101038A1 (en) * | 2022-11-11 | 2024-05-16 | 株式会社NTTドコモ | Avatar moving device |
WO2024111123A1 (en) * | 2022-11-25 | 2024-05-30 | 株式会社Abal | Virtual space experience system and virtual space experience method |
KR20240084992A (en) * | 2022-12-07 | 2024-06-14 | 삼성전자주식회사 | Wearable device for displaying visual objects for entering multiple virtual spaces and method thereof |
WO2024161463A1 (en) * | 2023-01-30 | 2024-08-08 | 株式会社ソニー・インタラクティブエンタテインメント | Metaverse management device, metaverse management method, and computer program |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2006301924A (en) * | 2005-04-20 | 2006-11-02 | Canon Inc | Image processing method and image processing apparatus |
JP2016062486A (en) * | 2014-09-19 | 2016-04-25 | 株式会社ソニー・コンピュータエンタテインメント | Image generation device and image generation method |
- 2016
- 2016-12-28: JP JP2016256772A patent/JP6761340B2/en active Active
- 2017
- 2017-12-28: WO PCT/JP2017/047250 patent/WO2018124280A1/en active Application Filing
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163977A (en) * | 2018-07-13 | 2019-08-23 | 腾讯数码(天津)有限公司 | Method and device for rendering virtual channels in multi-world virtual scenes |
US11263814B2 (en) * | 2018-07-13 | 2022-03-01 | Tencent Technology (Shenzhen) Company Limited | Method, apparatus, and storage medium for rendering virtual channel in multi-world virtual scene |
CN110163977B (en) * | 2018-07-13 | 2024-04-12 | 腾讯数码(天津)有限公司 | Virtual channel rendering method and device in multi-world virtual scene |
EP3968143A1 (en) * | 2020-09-15 | 2022-03-16 | Nokia Technologies Oy | Audio processing |
US11647350B2 (en) | 2020-09-15 | 2023-05-09 | Nokia Technologies Oy | Audio processing |
CN118451473A (en) * | 2021-12-22 | 2024-08-06 | 株式会社Celsys | Image generation method and image generation program |
Also Published As
Publication number | Publication date |
---|---|
JP6761340B2 (en) | 2020-09-23 |
JP2018109835A (en) | 2018-07-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2018124280A1 (en) | Simulation system, image processing method, and information storage medium | |
JP6754678B2 (en) | Simulation system and program | |
US11094106B2 (en) | Simulation system, processing method, and information storage medium for changing a display object in response to a movement of a field of view | |
JP6306442B2 (en) | Program and game system | |
WO2018012395A1 (en) | Simulation system, processing method, and information storage medium | |
US10183220B2 (en) | Image generation device and image generation method | |
US11738270B2 (en) | Simulation system, processing method, and information storage medium | |
JP6298130B2 (en) | Simulation system and program | |
JP2019175323A (en) | Simulation system and program | |
JP7144796B2 (en) | Simulation system and program | |
JP2019152899A (en) | Simulation system and program | |
CN112104857A (en) | Image generation system, image generation method, and information storage medium | |
JP6774260B2 (en) | Simulation system | |
JP6794390B2 (en) | Simulation system and program | |
JP6622832B2 (en) | Program and game system | |
JP7104539B2 (en) | Simulation system and program | |
JP6918189B2 (en) | Simulation system and program | |
JP2018171309A (en) | Simulation system and program | |
JP6660321B2 (en) | Simulation system and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 17889153; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 17889153; Country of ref document: EP; Kind code of ref document: A1 |