
WO2024111123A1 - Virtual space experience system and virtual space experience method - Google Patents

Virtual space experience system and virtual space experience method Download PDF

Info

Publication number
WO2024111123A1
WO2024111123A1 (PCT/JP2022/043614)
Authority
WO
WIPO (PCT)
Prior art keywords
area
virtual
real
virtual space
user
Prior art date
Application number
PCT/JP2022/043614
Other languages
French (fr)
Japanese (ja)
Inventor
良哉 尾小山
Original Assignee
株式会社Abal
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社Abal filed Critical 株式会社Abal
Priority to PCT/JP2022/043614 priority Critical patent/WO2024111123A1/en
Publication of WO2024111123A1 publication Critical patent/WO2024111123A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • the present invention relates to a virtual space experience system and a virtual space experience method that allow a user to experience a virtual space via an environment output device that allows the user to recognize the environment of the virtual space.
  • Conventionally, there are virtual space experience systems that generate a virtual space on a server or the like and allow a user to view an image of that virtual space via a head-mounted display (hereinafter sometimes referred to as an "HMD"), making the user perceive that they are present in that virtual space.
  • This type of virtual space experience system uses a motion capture device or the like to recognize the user's position and movement in real space (i.e., coordinate movement and posture change, etc.), and then moves and operates an avatar corresponding to the user in the virtual space according to the recognized position and movement (see, for example, Patent Document 1).
  • the present invention has been made in consideration of the above points, and aims to provide a system and method for experiencing a virtual space in which the shape and size of the virtual space are not likely to be restricted by the shape and size of the real space, making it easy to maintain a sense of immersion.
  • The virtual space experience system of the present invention comprises: a virtual space generation unit that generates a virtual space corresponding to a real space in which a user exists; an avatar generation unit that generates an avatar corresponding to the user in the virtual space; a user state recognition unit that recognizes the position and movement of the user; an avatar state control unit that controls the position and movement of the avatar based on the position and movement of the user; and an environment determination unit that determines the environment in the virtual space to be recognized by the user based on the position and movement of the avatar. In this virtual space experience system, which allows the user to experience the virtual space via an environment output device that outputs the environment of the virtual space, the virtual space generation unit generates, in the virtual space, a first virtual area corresponding to a first real area in the real space and a second virtual area corresponding to a second real area in the real space adjacent to the first real area, and the positional relationship between the first virtual area and the second virtual area is different from the positional relationship between the first real area and the second real area.
  • the virtual space includes a first virtual area corresponding to a first real area in real space, and a second virtual area corresponding to a second real area in real space adjacent to the first real area.
  • one specific area in real space is divided, and an independent virtual area is assigned to each of the divided areas.
  • the positional relationship between the first virtual area and the second virtual area is made different from the positional relationship between the first real area and the second real area.
  • this system makes it possible to make the size and shape of the entire virtual space different from the size and shape of the real space depending on the position of the virtual area (i.e., how it is arranged). For example, if two virtual areas are positioned offset in the vertical direction, the height of the entire virtual space can be made higher than the height of the corresponding entire real space.
  • this system makes it possible for the user to experience a wide variety of virtual spaces that would not be possible in real space.
  • the shape and size of the virtual space are made different from the shape and size of the corresponding real space by changing the positional relationship of the virtual regions in this way, the correspondence between the amount of movement and action of the user and the amount of movement and action of the avatar can be maintained constant, unlike when the shape and size of the virtual space are made different from the shape and size of the corresponding real space by transforming the shape and size of the virtual space.
  • this system makes it possible for the user to feel less uncomfortable between their own movements and actions and those of their avatar, allowing them to maintain the awareness that they are present in a virtual space (i.e., a sense of immersion).
  • It is preferable that the first virtual area and the second virtual area are spaced apart from each other, and that a third virtual area is disposed between them, the third virtual area either not corresponding to the real space or corresponding to a third real area that is independent of the first real area and the second real area.
  • an edge of the first virtual area corresponds to an edge of the second real area.
  • If the first virtual area and the second virtual area are positioned apart from each other and a third virtual area is positioned between them, then when the user moves from the first real area to the second real area, the avatar will move from the first virtual area to the second virtual area, jumping over the third virtual area.
  • However, if the boundary portion where such a sudden, third-area-skipping movement occurs is not indicated, the sudden movement of the avatar may give the user a sense of discomfort and hinder the sense of immersion.
  • To address this, the edge of the first virtual area is made to correspond to the edge of the second real area. With this configuration, when the user moves onto the edge of the second real area, the avatar exists at both the edge of the first virtual area (before the boundary) and the edge of the second virtual area (beyond the boundary), so the user can intuitively understand that crossing the boundary causes the avatar's sudden movement.
  • When the edge of the first virtual area corresponds to the edge of the second real area, it is preferable that the color tone of the edge of the first virtual area differs from the color tone of the other parts of the first virtual area.
  • This configuration allows the user to easily recognize the boundary portion (i.e., the boundary portion of the first area) before the avatar enters the edge. This makes the user aware that some kind of change will occur at the boundary portion. This makes it less likely that the user will feel uncomfortable even when the avatar enters the boundary portion and causes a sudden movement of the avatar, making it less likely that the sense of immersion will be hindered.
  • Likewise, when the edge of the first virtual area corresponds to the edge of the second real area, it is preferable that, when the avatar enters the edge of the first virtual area, the portion of the avatar located at that edge takes a form different from the other portions of the avatar.
  • the user can easily recognize the boundary portion. In turn, the user can be made aware that some kind of change will occur at the boundary portion. This makes it less likely that the user will feel uncomfortable even when the avatar enters the boundary portion and causes a sudden movement of the avatar, making it less likely that the sense of immersion will be hindered.
  • a correspondence relationship between the coordinate axes of the first real area and the coordinate axes of the first virtual area may be different from a correspondence relationship between the coordinate axes of the second real area and the coordinate axes of the second virtual area.
  • By making the correspondence between the coordinate axes of the real and virtual areas differ from area to area, the virtual space can be configured as a space that could not exist in real space. For example, if the coordinate axes of the first virtual area coincide with those of the first real area while the coordinate axes of the second virtual area are vertically inverted relative to those of the second real area, a virtual space can be realized in which moving from the first virtual area to the second virtual area turns the world upside down.
  • The virtual space experiencing method of the present invention comprises: a step in which a virtual space generation unit generates a virtual space corresponding to a real space in which a user exists; a step in which an avatar generation unit generates an avatar corresponding to the user in the virtual space; a step in which a user state recognition unit recognizes the position and movement of the user; a step in which an avatar state control unit controls the position and movement of the avatar based on the position and movement of the user; a step in which an environment determination unit determines the environment in the virtual space to be recognized by the user based on the position and movement of the avatar; and a step in which an environment output device outputs the environment of the virtual space to the user. The virtual space generation unit generates, in the virtual space, a first virtual area corresponding to a first real area in the real space and a second virtual area corresponding to a second real area in the real space adjacent to the first real area, and the positional relationship between the first virtual area and the second virtual area is different from the positional relationship between the first real area and the second real area.
  • FIG. 1 is a schematic diagram showing the general configuration of a VR system according to a first embodiment.
  • FIG. 2 is a block diagram showing the configuration of the VR system shown in FIG. 1.
  • FIG. 3 is a perspective view showing the state of real space and virtual space when the VR system of FIG. 1 is used.
  • FIG. 4 is a flowchart showing a process executed by the VR system of FIG. 1.
  • FIG. 5 is a perspective view showing the state of real space and virtual space when using the VR system of the second embodiment.
  • FIG. 6 is a perspective view showing the state of real space and virtual space when using the VR system of the third embodiment.
  • The VR system S allows a user U present in a real space RS (e.g., a room) to recognize the environment (e.g., images, sounds) of a virtual space VS1 that corresponds to the real space RS, and also makes the user U recognize that he or she exists in the virtual space VS1 by moving or operating an avatar A corresponding to the user U in the virtual space VS1 so as to correspond to the user U (see FIG. 3, etc.).
  • the virtual space experience system of the present invention is not limited to such a configuration, and the number of users may be two or more.
  • the VR system S includes a number of signs 1 that are attached to a user U in a real space RS, a camera 2 that photographs the user U (or, more precisely, the signs 1 attached to the user U), a server 3 that determines images and sounds in a virtual space VS1 (see FIG. 3), and a head-mounted display (hereinafter referred to as "HMD 4") that allows the user to recognize the determined images and sounds.
  • the camera 2, server 3, and HMD 4 can wirelessly transmit and receive information between each other via the Internet network, public lines, short-distance wireless communication, etc. However, any of them may also be configured to transmit and receive information between each other via wires.
  • the multiple signs 1 are attached to the user U's head, both hands, and both feet via the HMD 4, gloves, and shoes worn by the user U.
  • the multiple signs 1 are used to recognize the amount of movement and action of the user U in the real space RS, as described below. Therefore, the positions at which the signs 1 are attached, the number of signs 1 attached, etc. may be changed as appropriate depending on the other devices that make up the VR system S.
  • The camera 2 is installed so that it can capture, from multiple directions, the range within which the user U can move and act in the real space RS in which the user U exists.
  • the server 3 recognizes the sign 1 from the image captured by the camera 2, and recognizes the coordinates and posture (and thus the amount of movement and action) of the user U based on the position of the recognized sign 1 in the real space RS.
  • the server 3 also determines the environment of the virtual space VS1 (e.g., images, sounds, etc.) that the user U will recognize based on the coordinates and posture.
  • The HMD 4 is an environment output device that outputs the environment of the virtual space VS1 to the user so that the user can recognize it.
  • The HMD 4 is worn on the head of the user U.
  • The HMD 4 has a monitor 40 that allows the user U to recognize the image of the virtual space VS1 determined by the server 3, and a speaker 41 that allows the user U to recognize the sound of the virtual space VS1 determined by the server 3 (see FIG. 2).
  • When experiencing the virtual space VS1 using the VR system S, the user U is made to perceive only the images and sounds of the virtual space VS1 via the HMD 4, and is thus made to perceive that he or she is present in the virtual space VS1.
  • the VR system S is configured as a so-called immersive system.
  • the virtual space experience system of the present invention is not limited to such a configuration.
  • For example, when a motion capture device is used, a configuration in which the number and arrangement of the signs and cameras differ from the above configuration (for example, one of each) may be used.
  • Alternatively, without using signs, feature points may be recognized from images of the user themselves to recognize the user's posture and coordinates.
  • Instead of a motion capture device, another device may be used to recognize the user's state.
  • a sensor such as a GPS may be mounted on the HMD, and the user's coordinates, posture, etc. may be recognized based on the output from the sensor.
  • a sensor may be used in combination with the motion capture device described above.
  • the server 3 is composed of one or more electronic circuit units including a CPU, RAM, ROM, interface circuits, etc. As shown in FIG. 2, the server 3 has a virtual environment generation unit 30, a user state recognition unit 31, an avatar state control unit 32, and an environment determination unit 33 as functions (processing units) realized by the implemented hardware configuration or programs.
  • the virtual environment generation unit 30 has a virtual space generation unit 30a and an avatar generation unit 30b.
  • The virtual space generation unit 30a generates a virtual space VS1 that corresponds to the real space RS in which the user U exists. Specifically, the virtual space generation unit 30a generates images that serve as the background of the virtual space VS1 and the objects existing in the virtual space VS1, as well as sounds associated with those images.
  • Although the VR system S of this embodiment does not have such features, if the virtual space experience system has a feature for realizing a specific sensation (such as a cushion with variable hardness) or a feature for generating a specific smell, the virtual space generation unit may generate the virtual space using that sensation and smell in addition to the images and sounds.
  • the avatar generation unit 30b generates an avatar A corresponding to the user U in the virtual space VS1 (see Figure 3).
  • the state of the avatar A in the virtual space VS1 changes in response to changes in the state of the corresponding user U in the real space RS.
  • multiple avatars A may be generated for each user U.
  • the user state recognition unit 31 recognizes image data of the user U, including the sign 1 captured by the camera 2, and recognizes the state of the user U in the real space RS (position and movement, i.e., movement of coordinates, change of posture, etc.) based on the image data.
  • the user state recognition unit 31 has a user posture recognition unit 31a and a user coordinate recognition unit 31b.
  • the user posture recognition unit 31a extracts a sign 1 from the recognized image data of the user U, and recognizes the posture of the user U, including the orientation of each part of the body, based on the extraction result.
  • the user coordinate recognition unit 31b extracts sign 1 from the image data of the recognized user U, and recognizes the coordinates of user U based on the extraction results.
  • the avatar state control unit 32 controls the state (e.g., movement of coordinates, change of posture, etc.) of the avatar A corresponding to the user U in the virtual space VS1 based on the posture of the user U in the real space RS recognized by the user posture recognition unit 31a and the coordinates of the user U in the real space RS recognized by the user coordinate recognition unit 31b.
  • the environment determination unit 33 determines the environment of avatar A in the virtual space VS1 based on the state of avatar A (e.g., coordinates, posture, etc. at that time).
  • Here, the avatar's environment refers to things that affect the avatar in the virtual space; specifically, it refers to the state (e.g., position, posture) of objects in the virtual space relative to the state of the avatar.
  • the environment determination unit 33 determines the environment (images and sounds) of the virtual space VS1 that the user U corresponding to that avatar A will recognize through the monitor 40 and speaker 41 of the HMD 4.
  • The environment that the user is made to perceive refers to the environment in the virtual space perceived through the five senses; specifically, it refers to the images and sounds of the virtual space around the avatar that corresponds to the user.
  • virtual space images here include images of the background of the virtual space, as well as images of other avatars, images of objects that exist only in the virtual space, and images of objects that exist in the virtual space that correspond to the real world.
  • each processing unit constituting the virtual space experience system of the present invention is not limited to the configuration described above.
  • part of the processing unit provided in the server 3 in this embodiment may be provided in the HMD 4.
  • The system may be configured using multiple servers, or the server may be omitted and the CPUs mounted on the HMDs may cooperate to realize the processing units.
  • speakers other than those mounted on the HMD may be provided.
  • Devices that affect senses other than sight and hearing, such as devices that produce smells, wind, etc. corresponding to the virtual space, may also be included.
  • the virtual space VS1 is configured as a rectangular parallelepiped space overall.
  • The virtual space VS1 is composed of a first virtual area V1a (area bounded by a dashed line), which is a rectangular parallelepiped area located at one end of the entire space; a second virtual area V1b (area bounded by a dashed line), which is a rectangular parallelepiped area located at the other end of the entire space, separated from the first virtual area V1a; and a third virtual area V1c (area bounded by a dashed line), which is a rectangular parallelepiped area located between the first virtual area V1a and the second virtual area V1b.
  • The first virtual area V1a is generated as an area corresponding to the entire first real area Ra of the real space RS plus the edge of the second real area Rb on the first real area Ra side (the upper left side in FIG. 3). Therefore, the shape of the part of the first virtual area V1a excluding that edge (the first overlapping area V1d) is the same as or similar to the shape of the first real area Ra.
  • The second virtual area V1b is generated as an area corresponding to the entire second real area Rb of the real space RS plus the edge of the first real area Ra on the second real area Rb side (the lower right side in FIG. 3). Therefore, the shape of the part of the second virtual area V1b excluding that edge (the second overlapping area V1e) is the same as or similar to the shape of the second real area Rb.
  • The third virtual area V1c is generated as an area that does not correspond to any area in the real space RS. Because the virtual space VS1 includes the third virtual area V1c, the overall size of the virtual space VS1 is larger than the size of the corresponding real space RS. Note that the third virtual area V1c may also be generated as an area that corresponds to an area in a real space independent of the real space RS (a third real area).
  • the state of the avatar A corresponding to the user U also changes in response to a change in the state of the user U in the real space RS.
  • Therefore, when the user U moves from the first real area Ra to the second real area Rb, the avatar A jumps over the third virtual area V1c, in which the object O exists, and moves from the first virtual area V1a to the second virtual area V1b; by simply looking back after that movement, the user U can observe the object O from the other side.
  • one specific area in the real space RS is divided into two adjacent areas (a first real area Ra and a second real area Rb), and an independent virtual area (a first virtual area V1a and a second virtual area V1b) is assigned to each of the divided areas.
  • the positional relationship between the first virtual area V1a and the second virtual area V1b (i.e., the shape and size of the virtual space VS1) is made different from the positional relationship between the first real area Ra and the second real area Rb (i.e., the shape and size of the real space RS).
  • this VR system S allows the user U to experience a variety of virtual spaces that would not be possible in the real world.
  • the correspondence between the amount of movement and action of the user U and the amount of movement and action of the avatar A can be maintained constant, unlike when the shape and size of the virtual space is made different from the shape and size of the corresponding real space by deforming the shape and size of the virtual space.
  • this VR system S makes it possible to reduce the sense of incongruity felt by the user U between his or her own movements and actions and those of avatar A, allowing the user U to maintain the awareness that he or she is present in a virtual space (i.e., a sense of immersion).
  • the first virtual area V1a is configured as an area that corresponds not only to the first real area Ra but also to the edge of the second real area Rb on the first real area Ra side.
  • the area on the second virtual area V1b side of the first virtual area V1a corresponds to the edge of the second real area Rb on the first real area Ra side.
  • the second virtual area V1b is configured as an area that corresponds not only to the second real area Rb but also to the edge of the first real area Ra on the second real area Rb side.
  • the area on the first virtual area V1a side of the second virtual area V1b corresponds to the edge of the first real area Ra on the second real area Rb side.
  • the virtual space experience system of the present invention is not limited to this configuration, and it is not necessary to provide such an overlapping area in the virtual space. Therefore, for example, only one of the first overlapping area V1d and the second overlapping area V1e in this embodiment may be generated, or neither overlapping area may be generated.
  • the color tone of the floor surface in the first overlapping area V1d of the first virtual area V1a is made different from the color tone of the floor surface in other parts of the first virtual area V1a.
  • the color tone of the floor surface in the second overlapping area V1e of the second virtual area V1b is made different from the color tone of the floor surface in other parts of the second virtual area V1b.
  • the virtual space experience system of the present invention is not limited to this configuration, and the color tone of the floor surface in the overlapping area does not necessarily have to be different from the color tone of the floor surface in the other area. Therefore, the color tone of only one of the overlapping areas may be different from the color tone of the other area. Furthermore, the color tone of not only the floor surface but also the entire overlapping area may be different from the color tone of the entire other area. Furthermore, the color tone of the overlapping area does not have to be different from the color tone of the other area.
  • the VR system S is configured so that avatar A is semi-transparent in the first overlapping area V1d and the second overlapping area V1e.
  • the virtual space experience system of the present invention is not limited to this configuration, and the avatars do not necessarily have to be semi-transparent in the overlapping area. Therefore, the color, shape, etc. of the avatars may be made different. Furthermore, the form of the avatars may be made different only in one of the overlapping areas. Furthermore, the form of the avatars does not have to be made different in the overlapping areas.
  • the virtual environment generation unit 30 of the server 3 generates a virtual space VS1 and an avatar A (FIG. 4/STEP 100).
  • the virtual space generation unit 30a of the virtual environment generation unit 30 generates an image that serves as the background of the virtual space VS1 and an object O that exists in the virtual space VS1.
  • the avatar generation unit 30b of the virtual environment generation unit 30 generates an avatar A that corresponds to the user U.
  • The avatar state control unit 32 of the server 3 determines the state of avatar A based on the state of user U (FIG. 4/STEP 101).
  • The state of user U in the processing from STEP 101 onwards is the state recognized by the user state recognition unit 31 of the server 3 based on the image data captured by the camera 2.
  • The environment determination unit 33 of the server 3 determines the environment of avatar A based on the state of avatar A (FIG. 4/STEP 102).
  • The environment determination unit 33 determines the environment that the user U will recognize based on the environment of avatar A (FIG. 4/STEP 103).
  • The environment determination unit 33 determines the images and sounds of the virtual space VS1 that represent the environment of avatar A as the environment that the user U is to recognize.
  • The HMD 4 worn by the user U outputs the determined environment (FIG. 4/STEP 104).
  • The HMD 4 displays the determined image on the monitor 40 mounted on the HMD 4, and generates the determined sound from the speaker 41 mounted on the HMD 4.
  • The user state recognition unit 31 of the server 3 determines whether the user U has performed any action (FIG. 4/STEP 105).
  • The server 3 determines whether or not it has recognized a signal instructing the end of processing (FIG. 4/STEP 106).
  • If the VR system S does not recognize the signal instructing termination (NO in STEP 106), it returns to STEP 105 and executes the processing from STEP 105 onwards again.
  • The VR system of this embodiment has a configuration similar to that of the VR system S of the first embodiment, except that the shape of the virtual space VS2 generated by its virtual space generation unit is different from the shape of the virtual space VS1 generated by the virtual space generation unit 30a of the VR system S of the first embodiment.
  • the virtual space VS2 is composed of two rectangular parallelepiped regions spaced apart from each other.
  • The virtual space VS2 is composed of a first virtual area V2a (area bounded by a dashed line), which is a rectangular parallelepiped area, and a second virtual area V2b (area bounded by a dashed line), which is a rectangular parallelepiped area offset to the side of the first virtual area V2a as well as rearward and upward.
  • The first virtual area V2a is generated as an area corresponding to the entire first real area Ra of the real space RS plus the edge of the second real area Rb on the first real area Ra side (the upper left side in FIG. 5). Therefore, the shape of the part of the first virtual area V2a excluding that edge (the first overlapping area V2d) is the same as or similar to the shape of the first real area Ra.
  • The second virtual area V2b is generated as an area corresponding to the entire second real area Rb of the real space RS plus the edge of the first real area Ra on the second real area Rb side (the lower right side in FIG. 5). Therefore, the shape of the part of the second virtual area V2b excluding that edge (the second overlapping area V2e) is the same as or similar to the shape of the second real area Rb.
  • the state of the avatar A corresponding to the user U changes in response to changes in the state of the user U in the real space RS. Therefore, when the user U moves from the first real area Ra to the second real area Rb, the avatar A moves from the first virtual area V2a to the second virtual area V2b, regardless of the positional relationship between the first virtual area V2a and the second virtual area V2b.
  • the second virtual area V2b is located to the side of the first virtual area V2a (to the left as viewed from avatar A in the state shown in FIG. 3), shifted rearward and upward.
  • the VR system of the second embodiment that generates such a virtual space VS2 and the method of experiencing a virtual space using it can allow the user U to experience a variety of virtual spaces that would not be possible in real space, just like the VR system S of the first embodiment and the method of experiencing a virtual space using it.
  • The VR system of this embodiment has a configuration similar to that of the VR system S of the first embodiment, except that the shape of the virtual space VS3 generated by its virtual space generation unit is different from the shape of the virtual space VS1 generated by the virtual space generation unit 30a of the VR system S of the first embodiment.
  • the virtual space VS3 is a rectangular parallelepiped space overall, and is composed of two rectangular parallelepiped regions that are arranged so that some of them overlap.
  • The virtual space VS3 is composed of a first virtual area V3a (area bounded by a dashed line), which is a rectangular parallelepiped area, and a second virtual area V3b (area bounded by a dashed line), which is a rectangular parallelepiped area whose edge (the second overlapping area V3e) on one side (the upper left side in FIG. 6, the back side of the drawing) overlaps the edge (the first overlapping area V3d) on the other side (the lower right side in FIG. 6, the front side of the drawing) of the first virtual area V3a.
  • The first virtual area V3a is generated as an area corresponding to the entire first real area Ra of the real space RS plus the edge of the second real area Rb on the first real area Ra side (the upper left side in FIG. 6). Therefore, the shape of the part of the first virtual area V3a excluding that edge (the first overlapping area V3d) is the same as or similar to the shape of the first real area Ra.
  • The second virtual area V3b is generated as an area corresponding to the entire second real area Rb of the real space RS plus the edge of the first real area Ra on the second real area Rb side (the lower right side in FIG. 6). Therefore, the shape of the part of the second virtual area V3b excluding that edge (the second overlapping area V3e) is the same as or similar to the shape of the second real area Rb.
  • the state of the avatar A corresponding to the user U changes in response to a change in the state of the user U in the real space RS. Therefore, when the user U moves from the first real area Ra to the second real area Rb, the avatar A moves from the first virtual area V3a to the second virtual area V3b.
  • the correspondence between the coordinate axes of the first real area Ra and the coordinate axes of the first virtual area V3a is different from the correspondence between the coordinate axes of the second real area Rb and the coordinate axes of the second virtual area V3b.
  • Specifically, the coordinate axes of the first virtual area V3a are oriented in the same directions as the coordinate axes of the corresponding real space RS, whereas the coordinate axes of the second virtual area V3b are vertically inverted relative to the coordinate axes of the corresponding real space RS, resulting in an upside-down relationship.
  • The VR system of the third embodiment that generates such a virtual space VS3, and the method of experiencing a virtual space using it, can allow the user U to experience a variety of virtual spaces that would not be possible in real space, just like the VR system S of the first embodiment and the method of experiencing a virtual space using it.
  • In the third embodiment, the coordinate axes of a specified virtual area are inverted in the vertical direction relative to the coordinate axes of the real space.
  • However, changes in the correspondence relationship of the coordinate axes in this invention are not limited to such inversion in the vertical direction. For example, the coordinate axes of the virtual area may be turned sideways relative to the coordinate axes of the real space, or may be rotated in a specified direction.
  • In the first and second embodiments, the coordinate axes of the virtual areas are the same as the coordinate axes of the real space.
  • However, the coordinate axes of one of those virtual areas may be made different from the coordinate axes of the real space, as in the third embodiment.
  • In each of the above embodiments, the shape of the first virtual area excluding its edge (the first overlapping area) is the same as or similar to the shape of the first real area, and the shape of the second virtual area excluding its edge (the second overlapping area) is the same as or similar to the shape of the second real area.
  • However, the virtual space experience system and method of the present invention are not limited to such a configuration, and either the first virtual area or the second virtual area need not have the same or a similar shape as the corresponding real area. If configured in this way, however, it is preferable to make the degree of deformation of the virtual area relative to the real area the same for the two virtual areas, as this is less likely to impede the sense of immersion.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A VR system S generates, in a virtual space VS1, a first virtual region V1a corresponding to a first real region Ra of a real space RS, and a second virtual region V1b corresponding to a second real region Rb, of the real space RS, adjacent to the first real region Ra. The positional relationship between the first virtual region V1a and the second virtual region V1b is different from the positional relationship between the first real region Ra and the second real region Rb.

Description

Virtual space experience system and virtual space experience method
The present invention relates to a virtual space experience system and a virtual space experience method that allow a user to experience a virtual space via an environment output device that allows the user to recognize the environment of the virtual space.
Conventionally, there are virtual space experience systems that generate a virtual space on a server or the like, and allow a user to view an image of that virtual space via a head-mounted display (hereinafter sometimes referred to as an "HMD"), making the user perceive that they are present in that virtual space.
This type of virtual space experience system uses a motion capture device or the like to recognize the user's position and movement in real space (i.e., coordinate movement and posture change, etc.), and then moves and operates an avatar corresponding to the user in the virtual space according to the recognized position and movement (see, for example, Patent Document 1).
Patent Document 1: JP 2010-257461 A
In a so-called immersive virtual space experience system such as that described in Patent Document 1, in order to give the user a strong sense of awareness that he or she is present in the virtual space (i.e., to enhance the sense of immersion), it is preferable to maintain a constant correspondence between the amount of movement and action of the user in real space and the amount of movement and action of the avatar corresponding to that user in the virtual space, so that the user does not feel uncomfortable between his or her own movement and action and the movement and action of the avatar.
However, if one were to try to maintain a constant correspondence, one would naturally have to make the shape and size of the virtual space correspond to the shape and size of the real space, and the shape and size of the virtual space would end up being restricted by the shape and size of the real space. As a result, for example, when attempting to express a spatial aspect in a virtual space that is impossible in real space, there was a problem in that the expression would be restricted by the shape and size of the real space that corresponds to that virtual space.
The present invention has been made in consideration of the above points, and aims to provide a system and method for experiencing a virtual space in which the shape and size of the virtual space are not likely to be restricted by the shape and size of the real space, making it easy to maintain a sense of immersion.
The virtual space experience system of the present invention comprises:
a virtual space generation unit that generates a virtual space corresponding to a real space in which a user exists;
an avatar generation unit that generates an avatar corresponding to the user in the virtual space;
A user state recognition unit that recognizes the position and action of the user;
an avatar state control unit that controls a position and a movement of the avatar based on a position and a movement of the user;
an environment determination unit that determines an environment in the virtual space to be recognized by the user based on a position and a movement of the avatar;
A virtual space experiencing system that allows a user to experience a virtual space via an environment output device that outputs an environment of the virtual space,
the virtual space generation unit generates, in the virtual space, a first virtual area corresponding to a first real area in the real space and a second virtual area corresponding to a second real area in the real space adjacent to the first real area;
A positional relationship between the first virtual area and the second virtual area is different from a positional relationship between the first real area and the second real area.
In this way, in the virtual space experience system of the present invention, the virtual space includes a first virtual area corresponding to a first real area in real space, and a second virtual area corresponding to a second real area in real space adjacent to the first real area. In other words, in this system, one specific area in real space is divided, and an independent virtual area is assigned to each of the divided areas. Furthermore, in this system, the positional relationship between the first virtual area and the second virtual area is made different from the positional relationship between the first real area and the second real area.
As a result, this system makes it possible to make the size and shape of the entire virtual space different from the size and shape of the real space depending on the position of the virtual area (i.e., how it is arranged). For example, if two virtual areas are positioned offset in the vertical direction, the height of the entire virtual space can be made higher than the height of the corresponding entire real space. Ultimately, this system makes it possible for the user to experience a wide variety of virtual spaces that would not be possible in real space.
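The arrangement can be pictured with a minimal sketch. The following Python is not part of the patent; every name and number in it is an illustrative assumption. It shows how giving each divided real area its own translation into the virtual space lets the virtual areas be placed freely (here, the second one is lifted vertically) while movement inside each area stays at a 1:1 ratio:

```python
from dataclasses import dataclass

@dataclass
class AreaMapping:
    x_min: float   # bounds of the real area along x, in metres (axis-aligned)
    x_max: float
    offset: tuple  # translation (dx, dy, dz) placing this area's virtual counterpart

# Illustrative numbers: a 10 m real strip split into two adjacent 5 m areas,
# with the second virtual area lifted 3 m. Walking 1 m in real space still
# moves the avatar 1 m within each area, so the movement ratio stays 1:1.
MAPPINGS = [
    AreaMapping(0.0, 5.0, (0.0, 0.0, 0.0)),   # first real area  -> first virtual area
    AreaMapping(5.0, 10.0, (0.0, 0.0, 3.0)),  # second real area -> second virtual area
]

def real_to_virtual(x, y, z):
    """Translate a real-space point by the offset of the real area containing it."""
    for m in MAPPINGS:
        if m.x_min <= x < m.x_max:
            dx, dy, dz = m.offset
            return (x + dx, y + dy, z + dz)
    raise ValueError("point lies outside the tracked real space")

print(real_to_virtual(4.9, 1.0, 0.0))  # (4.9, 1.0, 0.0): first virtual area
print(real_to_virtual(5.1, 1.0, 0.0))  # (5.1, 1.0, 3.0): the avatar appears 3 m higher
```

With offsets like these, the virtual space spans more height than the real room ever does, which is the "taller than the real space" example described above.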
Furthermore, when the shape and size of the virtual space are made different from the shape and size of the corresponding real space by changing the positional relationship of the virtual areas in this way, the correspondence between the amount of movement and action of the user and the amount of movement and action of the avatar can be maintained constant, unlike when the shape and size of the virtual space are made different from the shape and size of the corresponding real space by transforming the shape and size of the virtual space.
As a result, this system makes it possible for the user to feel less uncomfortable between their own movements and actions and those of their avatar, allowing them to maintain the awareness that they are present in a virtual space (i.e., a sense of immersion).
In addition, in the virtual space experience system of the present invention,
The first virtual area and the second virtual area are spaced apart from each other,
It is preferable that a third virtual area is disposed between the first virtual area and the second virtual area, the third virtual area corresponding to a third real area that does not correspond to the real space or is independent of the first real area and the second real area.
In this way, by arranging two virtual areas that correspond to two adjacent real areas at a distance from each other and placing a third virtual area between them that is independent of them, it is possible to make the overall size of the virtual space larger than the size of the corresponding real space.
In addition, in the virtual space experience system of the present invention,
Preferably, an edge of the first virtual area corresponds to an edge of the second real area.
For example, if the first virtual area and the second virtual area are positioned apart from each other and a third virtual area is positioned between them, when the user moves from the first real area to the second real area, the avatar will move from the first virtual area to the second virtual area, jumping over the third virtual area.
However, if the boundary portion where the avatar makes such a sudden movement across the third virtual area is not indicated, the sudden movement of the avatar may give the user a sense of discomfort and hinder the sense of immersion.
In this case, it is advisable to make the edge of the first virtual area correspond to the edge of the second real area. With this configuration, when the user moves to the edge of the second real area, the avatar will be present both at the edge of the first virtual area (the side in front of the boundary) and at the edge of the second virtual area (the side beyond the boundary).
When the avatar is displayed in duplicate in a specific area in this way, the user can intuitively understand that this area is a boundary, and that the avatar will move suddenly if it crosses this boundary. As a result, even when the avatar moves suddenly, it is less likely to give the user a sense of discomfort, and less likely to disrupt the sense of immersion.
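A sketch of how such overlapping edges could be resolved at render time follows. It is an assumption-laden illustration, not the patent's implementation: all boundary positions, band widths, and offsets are invented for the example. A real position inside either edge band maps into both virtual areas, producing two avatar placements, which is what lets the user see the avatar on both sides of the boundary (and where the semi-transparent rendering mentioned in the embodiments would apply):

```python
# Illustrative sketch: with overlapping "edge" bands, a real position inside
# the band maps into BOTH virtual areas, so the avatar is drawn twice.
BOUNDARY = 5.0        # x where the first real area Ra meets the second real area Rb
EDGE = 0.5            # width of each area's edge band, in metres (assumed)
SECOND_AREA_DZ = 3.0  # vertical offset of the second virtual area (assumed)

def avatar_placements(x, y, z):
    """Return one (area, position) entry per virtual area the point maps into."""
    placements = []
    if x < BOUNDARY + EDGE:   # all of Ra, plus the edge of Rb on the Ra side
        placements.append(("first virtual area", (x, y, z)))
    if x >= BOUNDARY - EDGE:  # all of Rb, plus the edge of Ra on the Rb side
        placements.append(("second virtual area", (x, y, z + SECOND_AREA_DZ)))
    return placements

print(avatar_placements(2.0, 1.0, 0.0))  # one entry: deep inside the first area
print(avatar_placements(5.2, 1.0, 0.0))  # two entries: the avatar straddles the boundary
```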
In addition, in the virtual space experience system of the present invention, when the edge of the first virtual area corresponds to the edge of the second real area,
It is preferable that the color tone of the edge of the first virtual area is different from the color tone of the other part of the first virtual area.
This configuration allows the user to easily recognize the boundary portion (i.e., the boundary portion of the first virtual area) before the avatar enters the edge. This makes the user aware that some kind of change will occur at the boundary portion. This makes it less likely that the user will feel uncomfortable even when the avatar enters the boundary portion and causes a sudden movement of the avatar, making it less likely that the sense of immersion will be hindered.
In addition, in the virtual space experience system of the present invention, when the edge of the first virtual area corresponds to the edge of the second real area,
It is preferable that when the avatar enters the edge of the first virtual area, the portion of the avatar located at the edge of the first virtual area has a form different from other portions of the avatar.
When configured in this manner, when part of the avatar enters the edge (i.e., the boundary portion of the first virtual area), the user can easily recognize the boundary portion. In turn, the user can be made aware that some kind of change will occur at the boundary portion. This makes it less likely that the user will feel uncomfortable even when the avatar enters the boundary portion and causes a sudden movement of the avatar, making it less likely that the sense of immersion will be hindered.
In addition, in the virtual space experience system of the present invention,
A correspondence relationship between the coordinate axes of the first real area and the coordinate axes of the first virtual area may be different from a correspondence relationship between the coordinate axes of the second real area and the coordinate axes of the second virtual area.
In this way, by making the correspondence between the coordinate axes in the real and virtual areas different for each area, it is possible to configure the virtual space as a space that could not exist in real space. For example, if the coordinate axes of the first virtual area are made to coincide with those of the first real area and the coordinate axes of the second virtual area are made to be upside down compared to those of the second real area, then it is possible to realize a virtual space in which the top and bottom are reversed by moving from the first virtual area to the second virtual area.
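This per-area axis correspondence can be written as a small transform sketch. The code below is a minimal illustration under stated assumptions (a 3x3 axis matrix paired with a translation per area; the room height CEILING and all coordinates are invented), not the patent's formulation:

```python
import numpy as np

# Each area pairs an axis matrix with a translation: p_virtual = M @ p_real + t.
# The identity keeps the first area aligned with real space; flipping the z
# axis in the second area gives the "upside-down" example described above.
CEILING = 3.0  # assumed room height in metres

AXES = {
    "first":  (np.eye(3), np.zeros(3)),
    "second": (np.diag([1.0, 1.0, -1.0]), np.array([0.0, 0.0, CEILING])),
}

def to_virtual(area, p_real):
    """Apply the area's coordinate-axis correspondence to a real-space point."""
    m, t = AXES[area]
    return m @ np.asarray(p_real) + t

head = (1.0, 1.0, 1.7)             # a standing user's head position
print(to_virtual("first", head))   # [1.  1.  1.7]  same orientation as real space
print(to_virtual("second", head))  # [1.  1.  1.3]  the avatar hangs upside down
```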
The virtual space experiencing method of the present invention comprises:
A step in which a virtual space generation unit generates a virtual space corresponding to a real space in which a user exists;
an avatar generation unit generating an avatar corresponding to the user in the virtual space;
A user state recognition unit recognizes a position and a movement of the user;
an avatar state control unit controlling a position and a movement of the avatar based on a position and a movement of the user;
an environment determination unit determining an environment in a virtual space to be recognized by the user based on a position and a movement of the avatar;
a step of outputting an environment of the virtual space to the user by an environment output device,
the virtual space generation unit generates, in the virtual space, a first virtual area corresponding to a first real area in the real space and a second virtual area corresponding to a second real area in the real space adjacent to the first real area;
A positional relationship between the first virtual area and the second virtual area is different from a positional relationship between the first real area and the second real area.
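Read as a per-frame pipeline, these steps chain naturally from recognition to output. The stub classes below are assumptions used only to make the ordering concrete; they are not the patent's implementation, and every return value is a placeholder:

```python
# Stub sketch of the claimed method steps as one per-frame pipeline.
class Server:
    def recognize_user(self, image):
        # user state recognition step: position and movement from camera images
        return {"pos": (0.0, 0.0, 0.0), "pose": "standing"}

    def control_avatar(self, user_state):
        # avatar state control step: the avatar mirrors the user's state
        return {"pos": user_state["pos"], "pose": user_state["pose"]}

    def determine_environment(self, avatar_state):
        # environment determination step: pick what surrounds the avatar
        return {"image": f"view from {avatar_state['pos']}", "sound": "ambient"}

class EnvironmentOutput:
    def output(self, env):
        # environment output step: a monitor shows the image, a speaker plays the sound
        print(env["image"], "/", env["sound"])

server, hmd = Server(), EnvironmentOutput()
user = server.recognize_user(image=None)          # recognize position and movement
avatar = server.control_avatar(user)              # control the avatar accordingly
hmd.output(server.determine_environment(avatar))  # determine and output the environment
```

In the embodiments that follow, the recognition, control, and determination steps run on the server 3, and the output step runs on the HMD 4.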
FIG. 1 is a schematic diagram showing a general configuration of a VR system according to a first embodiment.
FIG. 2 is a block diagram showing the configuration of the VR system shown in FIG. 1.
FIG. 3 is a perspective view showing the state of real space and virtual space when the VR system of FIG. 1 is used.
FIG. 4 is a flowchart showing a process executed by the VR system of FIG. 1.
FIG. 5 is a perspective view showing the state of real space and virtual space when using the VR system of the second embodiment.
FIG. 6 is a perspective view showing the state of real space and virtual space when using the VR system of the third embodiment.
[First Embodiment]
Hereinafter, a VR system S (virtual space experiencing system) according to the first embodiment and the processing (virtual space experiencing method) executed by the VR system S will be described with reference to FIGS. 1 to 4.
The VR system S allows a user U present in a real space RS (e.g., a room) to recognize the environment (e.g., images, sounds, etc.) of a virtual space VS1 that corresponds to the real space RS, and also makes the user U recognize that he or she exists in the virtual space VS1 by moving or operating an avatar A corresponding to the user U in the virtual space VS1 to correspond to the user U (see FIG. 3, etc.).
In this embodiment, for ease of understanding, there is one user. However, the virtual space experience system of the present invention is not limited to such a configuration, and the number of users may be two or more.
[System Overview]
First, the schematic configuration of the VR system S will be described with reference to FIG. 1.
As shown in FIG. 1, the VR system S includes a plurality of signs 1 that are attached to a user U in a real space RS, a camera 2 that photographs the user U (or, more precisely, the signs 1 attached to the user U), a server 3 that determines images and sounds in a virtual space VS1 (see FIG. 3), and a head-mounted display (hereinafter referred to as the "HMD 4") that allows the user to recognize the determined images and sounds.
In the VR system S, the camera 2, the server 3, and the HMD 4 can wirelessly transmit and receive information between each other via the Internet, public lines, short-range wireless communication, etc. However, any of them may also be configured to transmit and receive information between each other via wires.
The multiple signs 1 are attached to the user U's head, both hands, and both feet via the HMD 4, gloves, and shoes worn by the user U. The multiple signs 1 are used to recognize the amount of movement and action of the user U in the real space RS, as described below. Therefore, the positions at which the signs 1 are attached, the number of signs 1 attached, etc. may be changed as appropriate depending on the other devices that make up the VR system S.
The camera 2 is installed so that it can capture, from multiple directions, the range within which the user U can move and act in the real space RS in which the user U exists.
The server 3 recognizes the signs 1 from the images captured by the camera 2, and recognizes the coordinates and posture (and thus the amount of movement and action) of the user U based on the positions of the recognized signs 1 in the real space RS. The server 3 also determines the environment of the virtual space VS1 (e.g., images, sounds) that the user U will recognize based on those coordinates and posture.
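For illustration only, here is a sketch of the last stage of such marker-based recognition, assuming the sign positions have already been triangulated from the camera 2 images. That triangulation step, the marker layout, and the facing-direction heuristic are all assumptions, not the patent's method:

```python
import math

# Assumed, already-triangulated sign positions in real-space metres.
markers = {
    "head":   (2.0, 3.0, 1.7),
    "l_hand": (1.7, 3.2, 1.1),
    "r_hand": (2.3, 3.2, 1.1),
}

def user_coordinates(m):
    """Take the head sign as the user's reference coordinates."""
    return m["head"]

def user_yaw(m):
    """Rough facing direction: perpendicular to the line joining the hand signs."""
    lx, ly, _ = m["l_hand"]
    rx, ry, _ = m["r_hand"]
    return math.atan2(rx - lx, -(ry - ly))  # radians, illustrative convention

print(user_coordinates(markers))        # (2.0, 3.0, 1.7)
print(math.degrees(user_yaw(markers)))  # 90.0 for this marker layout
```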
 HMD4は、ユーザに仮想空間VS1の環境を出力して認識させる環境出力器である。HMD4は、ユーザUの頭部に装着される。HMD4は、ユーザUに、サーバ3によって決定された仮想空間VS1の画像をユーザUの認識させるためのモニタ40と、サーバ3によって決定された仮想空間VS1の音をユーザUに認識させるためのスピーカ41とを有している(図2参照)。 The HMD4 is an environment output device that outputs the environment of the virtual space VS1 to the user so that the user can recognize it. The HMD4 is worn on the head of the user U. The HMD4 has a monitor 40 that allows the user U to recognize an image of the virtual space VS1 determined by the server 3, and a speaker 41 that allows the user U to recognize the sound of the virtual space VS1 determined by the server 3 (see Figure 2).
 VRシステムSを用いて仮想空間VS1を体感する場合、ユーザUは、HMD4を介して、仮想空間VS1の画像と音のみを認識させられて、自らが仮想空間VS1に存在していると認識させられる。すなわち、VRシステムSは、いわゆる没入型のシステムとして構成されている。 When experiencing the virtual space VS1 using the VR system S, the user U is made to perceive only the images and sounds of the virtual space VS1 via the HMD 4, and is made to perceive that he or she is present in the virtual space VS1. In other words, the VR system S is configured as a so-called immersive system.
 なお、本発明の仮想空間体感システムは、そのような構成に限定されるものではない。例えば、モーションキャプチャー装置を使用する場合には、上記の構成のものの他、標識及びカメラの数及び配置が上記構成とは異なるもの(例えば、それぞれ1つずつ設けられているものなど)を用いてもよい。また、標識を用いずに、ユーザの画像そのものから特徴点を認識して、ユーザの姿勢及び座標を認識するようにしてもよい。 The virtual space experience system of the present invention is not limited to such a configuration. For example, when using a motion capture device, in addition to the above configuration, a configuration in which the number and arrangement of signs and cameras are different from the above configuration (for example, one of each) may be used. Also, without using signs, feature points may be recognized from the image of the user itself to recognize the user's posture and coordinates.
 また、例えば、モーションキャプチャー装置に代わり、他の装置を用いてユーザの状態を認識するようにしてもよい。具体的には、例えば、HMDにGPSなどのセンサを搭載し、そのセンサからの出力に基づいて、ユーザの座標、姿勢などを認識するようにしてもよい。また、そのようなセンサと、上記のようなモーションキャプチャー装置とを併用してもよい。 Furthermore, for example, instead of a motion capture device, another device may be used to recognize the user's state. Specifically, for example, a sensor such as a GPS may be mounted on the HMD, and the user's coordinates, posture, etc. may be recognized based on the output from the sensor. Furthermore, such a sensor may be used in combination with the motion capture device described above.
[Configuration of the Processing Units]

Next, the configuration of the processing units provided in the server 3 will be described in detail with reference to FIG. 2.
The server 3 is composed of one or more electronic circuit units including a CPU, RAM, ROM, interface circuits, and the like. As shown in FIG. 2, the server 3 includes, as functions (processing units) realized by its hardware configuration or installed programs, a virtual environment generation unit 30, a user state recognition unit 31, an avatar state control unit 32, and an environment determination unit 33.

The virtual environment generation unit 30 has a virtual space generation unit 30a and an avatar generation unit 30b.

The virtual space generation unit 30a generates the virtual space VS1 corresponding to the real space RS in which the user U exists. Specifically, the virtual space generation unit 30a generates the images constituting the background of the virtual space VS1 and the objects existing in it, as well as the sounds associated with those images.

Although the VR system S of this embodiment does not include them, if a virtual space experience system includes a configuration that realizes a predetermined tactile sensation (e.g., a cushion of variable hardness) or a configuration that generates a predetermined smell, the virtual space generation unit may generate the virtual space using those sensations and smells in addition to the images and sounds.

The avatar generation unit 30b generates, in the virtual space VS1, an avatar A corresponding to the user U (see FIG. 3). The state of the avatar A in the virtual space VS1 changes in accordance with changes in the state of the corresponding user U in the real space RS. Note that a plurality of avatars A, in part or in whole, may be generated for a single user U.

The user state recognition unit 31 recognizes the image data of the user U, including the markers 1, captured by the camera 2, and recognizes, based on that image data, the state of the user U in the real space RS (position and motion, i.e., movement of coordinates, change of posture, and the like). The user state recognition unit 31 has a user posture recognition unit 31a and a user coordinate recognition unit 31b.

The user posture recognition unit 31a extracts the markers 1 from the recognized image data of the user U and, based on the extraction result, recognizes the posture of the user U, including the orientation of each part of the body.

The user coordinate recognition unit 31b extracts the markers 1 from the recognized image data of the user U and, based on the extraction result, recognizes the coordinates of the user U.

The avatar state control unit 32 controls the state of the avatar A in the virtual space VS1 (e.g., movement of coordinates, change of posture, and the like) based on the posture of the user U in the real space RS recognized by the user posture recognition unit 31a and the coordinates of the user U in the real space RS recognized by the user coordinate recognition unit 31b.

The environment determination unit 33 determines the environment of the avatar A in the virtual space VS1 based on the state of the avatar A (e.g., its coordinates and posture at that point in time).

Here, the "environment of the avatar" refers to what affects the avatar in the virtual space, for example, the states (e.g., positions and postures) of the objects existing in the virtual space relative to the state of the avatar.

Based on the determined environment of the avatar A, the environment determination unit 33 also determines the environment (images and sounds) of the virtual space VS1 to be perceived by the user U corresponding to that avatar A via the monitor 40 and the speaker 41 of the HMD 4.

Here, the "environment to be perceived by the user" refers to the environment of the virtual space that the user is made to perceive through the five senses, for example, the images and sounds of the virtual space around the avatar corresponding to that user.

The "images of the virtual space" here include, in addition to the background image of the virtual space, images of other avatars, images of objects existing only in the virtual space, and images of objects existing in the virtual space in correspondence with the real space.
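For orientation only, the division of labour among these units can be modelled in a few lines. The following Python fragment is a minimal, hypothetical sketch of the FIG. 2 pipeline; the class and method names are illustrative assumptions of this description, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple[float, float, float]     # coordinates in the space
    orientation: tuple[float, float, float]  # simplified whole-body posture

class UserStateRecognizer:
    """Role of the user state recognition unit 31 (31a + 31b)."""
    def recognize(self, camera_image) -> Pose:
        # Extract the markers 1 from the image, then estimate the user's
        # coordinates (31b) and posture (31a).  Stubbed in this sketch.
        raise NotImplementedError

class AvatarStateController:
    """Role of the avatar state control unit 32."""
    def update(self, user_pose: Pose) -> Pose:
        # Map the user's real-space pose to the avatar's virtual-space
        # pose (the area-dependent mapping is sketched further below).
        return user_pose

class EnvironmentDeterminer:
    """Role of the environment determination unit 33."""
    def determine(self, avatar_pose: Pose) -> dict:
        # Decide the image and sound the HMD 4 should output, from the
        # objects around the avatar at its current coordinates/posture.
        return {"image": None, "sound": None}
```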
The processing units constituting the virtual space experience system of the present invention are not limited to the configuration described above.

For example, some of the processing units provided in the server 3 in this embodiment may be provided in the HMD 4. The system may also be configured with a plurality of servers, or the server may be omitted and the processing performed by the CPU mounted on the HMD. Speakers other than the one mounted on the HMD may also be provided. In addition to devices affecting sight and hearing, devices affecting smell and touch, such as those producing smells or wind corresponding to the virtual space, may be included.
[Generated Virtual Space]

Here, the virtual space VS1 generated by the virtual space generation unit 30a of the VR system S of this embodiment will be described with reference to FIG. 3.
As shown in FIG. 3, the virtual space VS1 is configured as a rectangular parallelepiped space as a whole.

Specifically, the virtual space VS1 is composed of a first virtual area V1a (the area delimited by one-dot chain lines), a rectangular parallelepiped area located at one end of the whole; a second virtual area V1b (the area delimited by two-dot chain lines), a rectangular parallelepiped area located at the other end of the whole, spaced apart from the first virtual area V1a; and a third virtual area V1c (the area delimited by broken lines), a rectangular parallelepiped area located between the first virtual area V1a and the second virtual area V1b.

The first virtual area V1a is generated as an area corresponding to the whole of the first real area Ra of the real space RS and to the edge portion of the second real area Rb on the first real area Ra side (the upper left in FIG. 3). The shape of the first virtual area V1a excluding its edge portion (a first overlap area V1d) is therefore identical or similar to the shape of the first real area Ra.

The second virtual area V1b is generated as an area corresponding to the whole of the second real area Rb of the real space RS and to the edge portion of the first real area Ra on the second real area Rb side (the lower right in FIG. 3). The shape of the second virtual area V1b excluding its edge portion (a second overlap area V1e) is therefore identical or similar to the shape of the second real area Rb.

The third virtual area V1c is generated as an area that corresponds to no area of the real space RS. Because the virtual space VS1 includes such a third virtual area V1c, the virtual space VS1 as a whole is larger than the corresponding real space RS. Note that the third virtual area V1c may instead be generated as an area corresponding to an area of a real space independent of the real space RS (a third real area).
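As one way to make the layout concrete, the areas can be modelled as axis-aligned boxes. The following Python sketch is a hypothetical construction: the room dimensions, the depth of the edge strips, and the gap occupied by the third virtual area V1c are made-up values chosen only for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Box:
    """Axis-aligned floor-plan rectangle, coordinates in metres."""
    x0: float
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

# Real space RS: a 4 m x 8 m room, split into two adjacent 4 m x 4 m halves.
RA = Box(0.0, 0.0, 4.0, 4.0)   # first real area Ra
RB = Box(0.0, 4.0, 4.0, 8.0)   # second real area Rb (adjacent to Ra)
EDGE = 0.5                     # assumed depth of the boundary edge strips

# Virtual space VS1 (floor plan): V1a and V1b are same-shaped images of Ra
# and Rb, each extended by the near edge strip of the neighbouring real
# area (the overlap areas V1d / V1e), with the third virtual area V1c,
# which has no real-space counterpart, in between.
V1A = Box(0.0, 0.0, 4.0, 4.0 + EDGE)          # Ra + strip V1d
V1C = Box(0.0, 4.0 + EDGE, 4.0, 8.0 - EDGE)   # V1c: no real counterpart
V1B = Box(0.0, 8.0 - EDGE, 4.0, 12.0)         # Rb + strip V1e
```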
Of these virtual areas, in the first virtual area V1a and the second virtual area V1b, which correspond to the first real area Ra and the second real area Rb of the real space RS, the state of the avatar A corresponding to the user U changes in accordance with changes in the state of the user U in the real space RS.

When the first virtual area V1a and the second virtual area V1b are thus arranged apart from each other with the third virtual area V1c between them, the avatar A, upon the user U moving from the first real area Ra to the second real area Rb, moves from the first virtual area V1a to the second virtual area V1b by jumping over the third virtual area V1c.

Therefore, when, for example, the user U experiences the virtual space VS1 via the avatar A in order to observe a virtual object O placed in the third virtual area V1c, the user U can observe the object O from one side (the upper left in FIG. 3, i.e., the far side of the drawing) while the avatar A is located in the first virtual area V1a (i.e., while the user U is located in the first real area Ra).

When the user U then wishes to observe the object O from the other side (the lower right in FIG. 3, i.e., the near side of the drawing), the user U need only move from the first real area Ra to the second real area Rb; the avatar A jumps over the third virtual area V1c in which the object O exists and moves to the second virtual area V1b, so that simply by turning around after the move, the user U can observe the object O from the other side.

In this way, the VR system S divides one predetermined region of the real space RS into two adjacent areas (the first real area Ra and the second real area Rb) and associates an independent virtual area (the first virtual area V1a or the second virtual area V1b) with each of the divided areas.

On top of that, in the VR system S, the positional relationship between the first virtual area V1a and the second virtual area V1b (and hence the shape and size of the virtual space VS1) differs from the positional relationship between the first real area Ra and the second real area Rb (and hence the shape and size of the real space RS).
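Under the box layout sketched above, the real-to-virtual correspondence reduces to a per-area translation, so the user's movement always maps one-to-one onto the avatar's movement; only the offset changes between the two real areas. A hypothetical mapping, reusing the boxes defined in the earlier sketch, might read as follows.

```python
def real_to_virtual(x: float, y: float) -> tuple[float, float]:
    """Primary image of a real floor position in the virtual space VS1.

    Inside each real area the mapping is a pure translation, so the
    user's distances and directions of movement carry over to the
    avatar unchanged; only the offset differs between Ra and Rb, which
    is what makes the avatar jump over V1c at the boundary.
    (Box, RA, RB as defined in the earlier sketch.)
    """
    if RA.contains(x, y):
        return (x, y)        # Ra -> V1a: zero offset
    if RB.contains(x, y):
        return (x, y + 4.0)  # Rb -> V1b: offset past the gap of V1c
    raise ValueError("position outside the tracked real space")

# A small step across the Ra/Rb boundary teleports the avatar's primary
# image from the V1a side to the V1b side of the gap:
print(real_to_virtual(2.0, 4.00))  # (2.0, 4.0)   still mapped via Ra
print(real_to_virtual(2.0, 4.25))  # (2.0, 8.25)  now mapped via Rb
```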
This arrangement allows the VR system S to offer the user U a wide variety of virtual spaces that could not exist in the real space.

Furthermore, when the shape and size of the virtual space VS1 are made to differ from those of the corresponding real space RS by changing the positional relationship of the virtual areas in this way, the correspondence between the amounts of movement and motion of the user U and those of the avatar A can be kept constant, unlike when the difference is produced by deforming the shape and size of the virtual space itself.

As a result, the VR system S makes the user U less likely to feel a discrepancy between his or her own movements and motions and those of the avatar A, so that the user U can maintain the recognition of being present in the virtual space (i.e., the sense of immersion).

However, when the avatar A undergoes an abrupt movement such as jumping over the third virtual area V1c, that abrupt movement may give the user U a sense of discomfort and impair the sense of immersion.

Therefore, in the VR system S, as described above, the first virtual area V1a is configured to correspond not only to the first real area Ra but also to the edge portion of the second real area Rb on the first real area Ra side. Specifically, the region of the first virtual area V1a on the second virtual area V1b side (the region whose floor is hatched with diagonal lines in FIG. 3; hereinafter the "first overlap area V1d") corresponds to the edge portion of the second real area Rb on the first real area Ra side.

Likewise, the second virtual area V1b is configured to correspond not only to the second real area Rb but also to the edge portion of the first real area Ra on the second real area Rb side. Specifically, the region of the second virtual area V1b on the first virtual area V1a side (the region whose floor is hatched with diagonal lines in FIG. 3; hereinafter the "second overlap area V1e") corresponds to the edge portion of the first real area Ra on the second real area Rb side.

Therefore, as shown in FIG. 3, when the user U enters the edge portion of the second real area Rb from the first real area Ra, the part of the avatar A corresponding to the part of the body of the user U that has entered that edge portion (the front half of the avatar A in FIG. 3) exists in both the first overlap area V1d and the second virtual area V1b.

At the same time, the part of the avatar A corresponding to the part of the body of the user U remaining in the first real area Ra (the rear half of the avatar A in FIG. 3) exists in both the second overlap area V1e and the first virtual area V1a.

When the avatar A is displayed in duplicate in a predetermined area in this way, the user U can intuitively understand that the area is a boundary portion and that crossing it causes a jump over the third virtual area V1c. Consequently, even when such an abrupt movement of the avatar A occurs, the user U is less likely to feel a sense of discomfort, and the sense of immersion is less likely to be impaired.
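The duplicated display follows naturally from treating the overlap areas V1d and V1e as belonging to both mappings at once. The sketch below, a hypothetical continuation of the earlier boxes (the 0.5 m strip depth remains an assumed value), computes every virtual position at which a tracked body part should be drawn.

```python
def virtual_images(x: float, y: float) -> list[tuple[float, float, bool]]:
    """Every virtual position at which a real point should be drawn.

    A point deep inside Ra or Rb has exactly one image; a point inside
    either boundary edge strip gains a second image on the far side of
    the gap, so a body part there is drawn in both virtual areas at
    once.  The flag marks images lying in an overlap area (V1d or V1e),
    e.g. so the renderer can draw that copy semi-transparently.
    (RA, RB, EDGE as defined in the earlier sketches.)
    """
    images: list[tuple[float, float, bool]] = []
    if RA.contains(x, y):
        images.append((x, y, False))           # image in V1a proper
        if y >= RB.y0 - EDGE:                  # inside Ra's edge strip
            images.append((x, y + 4.0, True))  # twin image in V1e
    elif RB.contains(x, y):
        images.append((x, y + 4.0, False))     # image in V1b proper
        if y <= RB.y0 + EDGE:                  # inside Rb's edge strip
            images.append((x, y, True))        # twin image in V1d
    return images

# An avatar straddling the boundary: the front half (y = 4.25, in Rb)
# appears in V1b and, semi-transparently, in V1d; the rear half
# (y = 3.75, in Ra) appears in V1a and, semi-transparently, in V1e.
print(virtual_images(2.0, 4.25))  # [(2.0, 8.25, False), (2.0, 4.25, True)]
print(virtual_images(2.0, 3.75))  # [(2.0, 3.75, False), (2.0, 7.75, True)]
```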
Note that the virtual space experience system of the present invention is not limited to such a configuration, and such overlap areas need not necessarily be provided in the virtual space. For example, only one of the first overlap area V1d and the second overlap area V1e of this embodiment may be generated, or neither may be generated.

In the VR system S, the color tone of the floor of the first overlap area V1d is also made to differ from that of the floor of the rest of the first virtual area V1a. Likewise, the color tone of the floor of the second overlap area V1e is made to differ from that of the floor of the rest of the second virtual area V1b.

This is so that the user U can easily recognize an overlap area before the avatar A enters it and, by extension, becomes aware that some change will occur at the boundary portion.

Note that the virtual space experience system of the present invention is not limited to such a configuration; the color tone of the floor of an overlap area need not be made to differ from that of the other areas. Only one of the overlap areas may differ in color tone from the other areas, the color tone of an entire overlap area rather than only its floor may differ from that of the other areas as a whole, or the color tone of the overlap areas need not differ at all.

Furthermore, in the VR system S, the avatar A is rendered semi-transparent while in the first overlap area V1d or the second overlap area V1e.

By changing the form of the avatar A in an overlap area, the user U can easily recognize the overlap area when a part of the avatar A enters it and, by extension, becomes aware that some change will occur at the boundary portion.

Note that the virtual space experience system of the present invention is not limited to such a configuration, and the avatar need not be made semi-transparent in the overlap areas. The color, shape, or the like of the avatar may be varied instead, the form of the avatar may be varied in only one of the overlap areas, or the form of the avatar need not be varied in the overlap areas at all.
[Processing Executed]

Next, the processing executed by the VR system S when allowing the user U to experience the virtual space VS1 (i.e., the virtual space experience method) will be described with reference to FIGS. 2 to 4.
In this processing, first, the virtual environment generation unit 30 of the server 3 generates the virtual space VS1 and the avatar A (FIG. 4 / STEP 100).

Specifically, as shown in FIG. 3, the virtual space generation unit 30a of the virtual environment generation unit 30 generates the background image of the virtual space VS1 and the object O existing in the virtual space VS1, and the avatar generation unit 30b of the virtual environment generation unit 30 generates the avatar A corresponding to the user U.

Next, the avatar state control unit 32 of the server 3 determines the state of the avatar A based on the state of the user U (FIG. 4 / STEP 101). As the state of the user U in the processing from STEP 101 onward, the state recognized by the user state recognition unit 31 of the server 3 based on the image data captured by the camera 2 is used.

Next, the environment determination unit 33 of the server 3 determines the environment of the avatar A based on the state of the avatar A (FIG. 4 / STEP 102).

Next, the environment determination unit 33 determines the environment to be perceived by the user U based on the environment of the avatar A (FIG. 4 / STEP 103). Specifically, the environment determination unit 33 determines, as the environment to be perceived by the user U, the images and sounds of the virtual space VS1 representing the environment of the avatar A.

Next, the HMD 4 worn by the user U outputs the determined environment (FIG. 4 / STEP 104). Specifically, the HMD 4 displays the determined images on its monitor 40 and produces the determined sounds from its speaker 41.

Next, the user state recognition unit 31 of the server 3 determines whether the user U has performed some action (FIG. 4 / STEP 105).

If the user U has performed some action (YES in STEP 105), the processing returns to STEP 101, and the processing from STEP 101 onward is executed again.

If the user U has not performed any action (NO in STEP 105), the server 3 determines whether it has recognized a signal instructing the end of the processing (FIG. 4 / STEP 106).

If the VR system S has not recognized a signal instructing the end (NO in STEP 106), the processing returns to STEP 105, and the processing from STEP 105 onward is executed again. If a signal instructing the end has been recognized (YES in STEP 106), the VR system S ends the current processing.
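The flow of FIG. 4 maps directly onto a simple event loop. The following sketch is a hypothetical rendering of STEP 100 to STEP 106 in Python, reusing the pipeline classes assumed earlier; the camera, HMD, and termination signal are placeholder objects of this description, not disclosed components.

```python
import threading

def run_vr_session(camera, hmd, recognizer, controller, determiner,
                   stop_flag: threading.Event) -> None:
    """Hypothetical event loop mirroring FIG. 4 (STEP 100-106).

    The parameters are assumed stand-ins: `recognizer` plays the role
    of the user state recognition unit 31, `controller` the avatar
    state control unit 32, `determiner` the environment determination
    unit 33, and `hmd` outputs the decided images and sounds.
    """
    # STEP 100: the virtual environment generation unit 30 builds the
    # virtual space VS1 and the avatar A once, before the loop starts.
    last_pose = None
    while not stop_flag.is_set():                       # STEP 106
        user_pose = recognizer.recognize(camera.capture())
        if user_pose == last_pose:                      # STEP 105: no
            continue                                    # action yet
        last_pose = user_pose
        avatar_pose = controller.update(user_pose)      # STEP 101
        avatar_env = determiner.determine(avatar_pose)  # STEP 102
        hmd.output(avatar_env)                          # STEP 103 + 104
```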
[Second Embodiment]

The VR system according to the second embodiment will be described below with reference to FIG. 5.
The VR system of this embodiment has the same configuration as the VR system S of the first embodiment, except that the shape of the virtual space VS2 generated by its virtual space generation unit differs from that of the virtual space VS1 generated by the virtual space generation unit 30a of the VR system S of the first embodiment.

Therefore, only the generated virtual space VS2 is described below. Configurations identical or corresponding to those of the VR system S of the first embodiment are given the same reference signs, and detailed description thereof is omitted.
As shown in FIG. 5, the virtual space VS2 is composed of two rectangular parallelepiped areas arranged apart from each other.

Specifically, the virtual space VS2 is composed of a first virtual area V2a (the area delimited by one-dot chain lines), a rectangular parallelepiped area, and a second virtual area V2b (the space delimited by two-dot chain lines), a rectangular parallelepiped area located to the side of the first virtual area V2a at a position shifted rearward and upward.

The first virtual area V2a is generated as an area corresponding to the whole of the first real area Ra of the real space RS and to the edge portion of the second real area Rb on the first real area Ra side (the upper left in FIG. 5). The shape of the first virtual area V2a excluding its edge portion (a first overlap area V2d) is therefore identical or similar to the shape of the first real area Ra.

The second virtual area V2b is generated as an area corresponding to the whole of the second real area Rb of the real space RS and to the edge portion of the first real area Ra on the second real area Rb side (the lower right in FIG. 5). The shape of the second virtual area V2b excluding its edge portion (a second overlap area V2e) is therefore identical or similar to the shape of the second real area Rb.

When the user U enters the edge portion of the second real area Rb from the first real area Ra, the part of the avatar A corresponding to the part of the body of the user U that has entered that edge portion (the front half of the avatar A in FIG. 5) exists in both the first overlap area V2d and the second virtual area V2b. At the same time, the part of the avatar A corresponding to the part of the body of the user U remaining in the first real area Ra (the rear half of the avatar A in FIG. 5) exists in both the second overlap area V2e and the first virtual area V2a.

In these virtual areas, the state of the avatar A corresponding to the user U changes in accordance with changes in the state of the user U in the real space RS. Therefore, when the user U moves from the first real area Ra to the second real area Rb, the avatar A moves from the first virtual area V2a to the second virtual area V2b, regardless of the positional relationship between the two.

Here, the second virtual area V2b is located to the side of the first virtual area V2a (the left side as viewed from the avatar A in the state shown in FIG. 5), at a position shifted rearward and upward. During the move, therefore, the user U can see the back of his or her own avatar A to the right, ahead, and below.
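In this embodiment the two virtual areas again correspond to Ra and Rb by pure translation; only the displacement of V2b now has lateral, backward, and upward components. A hypothetical three-dimensional variant of the earlier mapping follows; the offset values are made-up illustrations, not disclosed dimensions.

```python
# Assumed displacement of V2b relative to V2a: sideways, rearward and
# upward (x: lateral, y: depth, z: height; the metre values are made up).
V2B_OFFSET = (5.0, -2.0, 3.0)

def real_to_virtual_vs2(x: float, y: float,
                        z: float) -> tuple[float, float, float]:
    """Second-embodiment mapping: Ra -> V2a unchanged, Rb -> V2b offset.

    (RA, RB as defined in the first-embodiment sketches.)
    """
    if RA.contains(x, y):
        return (x, y, z)
    if RB.contains(x, y):
        dx, dy, dz = V2B_OFFSET
        return (x + dx, y + dy, z + dz)
    raise ValueError("position outside the tracked real space")

# While the user straddles the Ra/Rb boundary, the two halves of the
# avatar are drawn at positions related by V2B_OFFSET, so from the new
# viewpoint the user can see the back of the half left behind to the
# side, ahead and below.
```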
The VR system of the second embodiment, which generates such a virtual space VS2, and the virtual space experience method using it can, like the VR system S of the first embodiment and the method using it, allow the user U to experience a wide variety of virtual spaces that could not exist in the real space. They also make the user U less likely to feel a discrepancy between his or her own movements and motions and those of the avatar A, so that the user U can maintain the recognition of being present in the virtual space (i.e., the sense of immersion).
[Third Embodiment]

The VR system according to the third embodiment will be described below with reference to FIG. 6.
The VR system of this embodiment has the same configuration as the VR system S of the first embodiment, except that the shape of the virtual space VS3 generated by its virtual space generation unit differs from that of the virtual space VS1 generated by the virtual space generation unit 30a of the VR system S of the first embodiment.

Therefore, only the generated virtual space VS3 is described below. Configurations identical or corresponding to those of the VR system S of the first embodiment are given the same reference signs, and detailed description thereof is omitted.
As shown in FIG. 6, the virtual space VS3 is a rectangular parallelepiped space as a whole and is composed of two rectangular parallelepiped areas arranged so as to partially overlap.

Specifically, the virtual space VS3 is composed of a first virtual area V3a (the area delimited by one-dot chain lines), a rectangular parallelepiped area, and a second virtual area V3b (the space delimited by two-dot chain lines), a rectangular parallelepiped area positioned so that its edge portion on one side (the upper left in FIG. 6, i.e., the far side of the drawing; a second overlap area V3e) overlaps the edge portion of the first virtual area V3a on the other side (the lower right in FIG. 6, i.e., the near side of the drawing; a first overlap area V3d).

The first virtual area V3a is generated as an area corresponding to the whole of the first real area Ra of the real space RS and to the edge portion of the second real area Rb on the first real area Ra side (the upper left in FIG. 6). The shape of the first virtual area V3a excluding its edge portion (the first overlap area V3d) is therefore identical or similar to the shape of the first real area Ra.

The second virtual area V3b is generated as an area corresponding to the whole of the second real area Rb of the real space RS and to the edge portion of the first real area Ra on the second real area Rb side (the lower right in FIG. 6). The shape of the second virtual area V3b excluding its edge portion (the second overlap area V3e) is therefore identical or similar to the shape of the second real area Rb.

When the user U enters the edge portion of the second real area Rb from the first real area Ra, the part of the avatar A corresponding to the part of the body of the user U that has entered that edge portion (the front half of the avatar A in FIG. 6) exists in both the first overlap area V3d and the second virtual area V3b. At the same time, the part of the avatar A corresponding to the part of the body of the user U remaining in the first real area Ra (the rear half of the avatar A in FIG. 6) exists in both the second overlap area V3e and the first virtual area V3a.

In these virtual areas, the state of the avatar A corresponding to the user U changes in accordance with changes in the state of the user U in the real space RS. Therefore, when the user U moves from the first real area Ra to the second real area Rb, the avatar A moves from the first virtual area V3a to the second virtual area V3b.

Here, the correspondence between the coordinate axes of the first real area Ra and those of the first virtual area V3a differs from the correspondence between the coordinate axes of the second real area Rb and those of the second virtual area V3b. Specifically, the coordinate axes of the first virtual area V3a are oriented the same as those of the corresponding real space RS, whereas the coordinate axes of the second virtual area V3b are inverted in the vertical direction relative to those of the corresponding real space RS; that is, the second virtual area V3b is upside down.

In addition, the first overlap area V3d, the edge portion of the first virtual area V3a, and the second overlap area V3e, the edge portion of the second virtual area V3b, are positioned so as to overlap each other. During the move, therefore, the user U can see, above or below, the crown-side surface of his or her own avatar A (i.e., his or her own avatar A upside down).
The VR system of the third embodiment, which generates such a virtual space VS3, and the virtual space experience method using it can, like the VR system S of the first embodiment and the method using it, allow the user U to experience a wide variety of virtual spaces that could not exist in the real space. They also make the user U less likely to feel a discrepancy between his or her own movements and motions and those of the avatar A, so that the user U can maintain the recognition of being present in the virtual space (i.e., the sense of immersion).

In this embodiment, the case has been described in which the coordinate axes of a given virtual area are inverted in the vertical direction relative to the coordinate axes of the real space. However, the change in the correspondence of coordinate axes in the present invention is not limited to such vertical inversion. For example, the coordinate axes of a virtual area may be laid on their side relative to those of the real space, or rotated in a predetermined direction.
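The axis correspondence of this embodiment, and the sideways or rotated variants just mentioned, can all be expressed as a per-area linear transform applied together with an offset. The sketch below is a hypothetical formulation; the matrix for V3b flips the vertical axis, and the offsets and the 3 m room height are assumed values chosen so that the strips V3d and V3e coincide.

```python
import numpy as np

# Per-area affine maps p' = M @ p + t for p = (x, y, z).  V3a keeps the
# real-space axes; V3b flips the vertical axis, so the second virtual
# area is upside down.  The y-offset (EDGE) makes V3e land on V3d, and
# the z-offset keeps the flipped area within the assumed 0-3 m height.
M_V3A = np.eye(3)
T_V3A = np.array([0.0, 0.0, 0.0])
M_V3B = np.diag([1.0, 1.0, -1.0])    # invert the vertical (z) axis
T_V3B = np.array([0.0, 0.5, 3.0])    # assumed offsets (0.5 m = EDGE)

def real_to_virtual_vs3(p: np.ndarray) -> np.ndarray:
    """Third-embodiment mapping with area-dependent coordinate axes.

    (RA, RB as defined in the first-embodiment sketches.)
    """
    x, y, z = p
    if RA.contains(x, y):
        return M_V3A @ p + T_V3A
    if RB.contains(x, y):
        return M_V3B @ p + T_V3B
    raise ValueError("position outside the tracked real space")

# The sideways or rotated variants mentioned above only change M, e.g.
# a 90-degree rotation about the depth axis:
# M_ROT = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
```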
[Other Embodiments]

Although the illustrated embodiments have been described above, the present invention is not limited to these embodiments.
For example, in the first and second embodiments, the two virtual areas are spaced apart while their coordinate axes are the same as those of the real space. However, in the first and second embodiments as well, the coordinate axes of either virtual area may be made to differ from those of the real space, as in the third embodiment.

In all of the embodiments above, the shape of the first virtual area excluding its edge portion (the first overlap area) is identical or similar to the shape of the first real area, and the shape of the second virtual area excluding its edge portion (the second overlap area) is identical or similar to the shape of the second real area.

However, the virtual space experience system and the virtual space experience method of the present invention are not limited to such a configuration; either the first virtual area or the second virtual area need not be identical or similar in shape to the corresponding real area. In such a configuration, however, it is preferable to make the degree of deformation of the virtual area relative to the real area roughly the same for the two virtual areas, since this makes the sense of immersion less likely to be impaired.
1: marker; 2: camera; 3: server; 4: HMD (environment output device); 30: virtual environment generation unit; 30a: virtual space generation unit; 30b: avatar generation unit; 31: user state recognition unit; 31a: user posture recognition unit; 31b: user coordinate recognition unit; 32: avatar state control unit; 33: environment determination unit; 40: monitor; 41: speaker; A: avatar; O: object; RS: real space; Ra: first real area; Rb: second real area; S: VR system (virtual space experience system); U: user; VS1, VS2, VS3: virtual space; V1a, V2a, V3a: first virtual area; V1b, V2b, V3b: second virtual area; V1c: third virtual area; V1d, V2d, V3d: first overlap area; V1e, V2e, V3e: second overlap area.

Claims (7)

1. A virtual space experience system that allows a user to experience a virtual space via an environment output device that outputs an environment of the virtual space, the system comprising:
   a virtual space generation unit that generates the virtual space corresponding to a real space in which the user exists;
   an avatar generation unit that generates an avatar corresponding to the user in the virtual space;
   a user state recognition unit that recognizes a position and a motion of the user;
   an avatar state control unit that controls a position and a motion of the avatar based on the position and the motion of the user; and
   an environment determination unit that determines, based on the position and the motion of the avatar, the environment in the virtual space to be perceived by the user,
   wherein the virtual space generation unit generates, in the virtual space, a first virtual area corresponding to a first real area of the real space and a second virtual area corresponding to a second real area of the real space adjacent to the first real area, and
   a positional relationship between the first virtual area and the second virtual area differs from a positional relationship between the first real area and the second real area.

2. The virtual space experience system according to claim 1, wherein the first virtual area and the second virtual area are spaced apart from each other, and a third virtual area that does not correspond to the real space, or that corresponds to a third real area independent of the first real area and the second real area, is disposed between the first virtual area and the second virtual area.

3. The virtual space experience system according to claim 1, wherein an edge of the first virtual area corresponds to an edge of the second real area.

4. The virtual space experience system according to claim 3, wherein a color tone of the edge of the first virtual area differs from a color tone of the other parts of the first virtual area.

5. The virtual space experience system according to claim 3, wherein, when the avatar enters the edge of the first virtual area, the part of the avatar located at the edge of the first virtual area takes a form different from that of the other parts of the avatar.

6. The virtual space experience system according to claim 1, wherein a correspondence between coordinate axes of the first real area and coordinate axes of the first virtual area differs from a correspondence between coordinate axes of the second real area and coordinate axes of the second virtual area.

7. A virtual space experience method comprising the steps of:
   a virtual space generation unit generating a virtual space corresponding to a real space in which a user exists;
   an avatar generation unit generating an avatar corresponding to the user in the virtual space;
   a user state recognition unit recognizing a position and a motion of the user;
   an avatar state control unit controlling a position and a motion of the avatar based on the position and the motion of the user;
   an environment determination unit determining, based on the position and the motion of the avatar, an environment in the virtual space to be perceived by the user; and
   an environment output device outputting the environment of the virtual space to the user so that the user experiences the virtual space,
   wherein the virtual space generation unit generates, in the virtual space, a first virtual area corresponding to a first real area of the real space and a second virtual area corresponding to a second real area of the real space adjacent to the first real area, and
   a positional relationship between the first virtual area and the second virtual area differs from a positional relationship between the first real area and the second real area.
