
WO2022091589A1 - Information processing device, information processing method, and program - Google Patents

Information processing device, information processing method, and program

Info

Publication number
WO2022091589A1
WO2022091589A1 (PCT/JP2021/032984, JP2021032984W)
Authority
WO
WIPO (PCT)
Prior art keywords
user
information processing
display
content
control unit
Prior art date
Application number
PCT/JP2021/032984
Other languages
French (fr)
Japanese (ja)
Inventor
陽方 川名
茜 近藤
保乃花 尾崎
Original Assignee
Sony Group Corporation (ソニーグループ株式会社)
Priority date
Filing date
Publication date
Application filed by Sony Group Corporation (ソニーグループ株式会社)
Publication of WO2022091589A1 publication Critical patent/WO2022091589A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/033Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
    • G06F3/0346Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance

Definitions

  • This disclosure relates to information processing devices, information processing methods, and programs.
  • Patent Document 1 below discloses a technique for encouraging a user to take an action to improve the projection environment.
  • In that technique, an appropriate projection location is determined from a plurality of candidates, the user's line of sight is guided to the projection location, and actions such as having the user remove obstacles placed at the projection location are elicited.
  • However, in Patent Document 1, when urging the user to take a predetermined action, a one-sided instruction is given explicitly, which may give the user a feeling of coercion.
  • In the present disclosure, therefore, there is proposed an information processing device including a control unit that performs display control in which the user is implicitly guided to a specific viewing place in the real space based on the position of the display area of the image recognized from the sensing data of the real space and the position of the user.
  • There is also proposed an information processing method that includes a processor performing display control that implicitly guides the user, via a display device, to a particular viewing location in the real space based on the position of the display area of the image recognized from the sensing data of the real space and the position of the user.
  • There is further proposed a program that causes a computer to function as a control unit that performs display control that implicitly guides the user, via a display device, to a particular viewing location in the real space based on the position of the display area of the image recognized from the sensing data of the real space and the position of the user.
  • FIG. 1 is a diagram illustrating an outline of an information processing system according to an embodiment of the present disclosure. As shown in FIG. 1, the information processing system according to the present embodiment includes a projector 210, a camera 310, and an information processing device 100.
  • the projector 210 is a display device that projects (displays) an image at an arbitrary place in real space.
  • the projector 210 projects an image on an arbitrary place such as a floor, a wall, a desk, a table, or a sofa in a conference room or a living space.
  • the projector 210 is an example of the output device 200 (FIG. 3).
  • the projector 210 projects an image in the real space included in the projection angle of view.
  • the image to be projected is output from the information processing apparatus 100.
  • the projected angle of view means a projectable range, and is also referred to as a "projection area" in the present specification.
  • the projection area is an example of a display area.
  • the projection area is defined by the position of the projector 210, the projection direction, and the angle of the projectable range about the projection direction as the central axis.
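  • As a purely illustrative aside (not part of the patent text), the geometry described above can be sketched in a few lines of Python. The function and parameter names are hypothetical; the sketch assumes a projector at a known height whose central axis is aimed at the floor, and estimates where that axis lands and the radius of the projectable range.

```python
import math

def projection_footprint(projector_pos, projection_dir, half_angle_deg):
    """Estimate where the central projection axis hits the floor (z = 0) and the
    approximate radius of the projectable range around that point.

    projector_pos  : (x, y, z) installation position of the projector
    projection_dir : (dx, dy, dz) unit vector of the projection direction
    half_angle_deg : half of the projectable angle about the central axis
    """
    px, py, pz = projector_pos
    dx, dy, dz = projection_dir
    if dz >= 0:
        raise ValueError("projection direction must point down toward the floor")
    t = -pz / dz                                   # axis length down to the floor
    center = (px + t * dx, py + t * dy)
    radius = t * math.tan(math.radians(half_angle_deg))
    return center, radius

# Ceiling-mounted projector at 2.5 m pointing straight down, 30-degree half angle.
center, radius = projection_footprint((1.0, 2.0, 2.5), (0.0, 0.0, -1.0), 30.0)
print(center, round(radius, 2))                    # (1.0, 2.0) 1.44
```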
  • the image projected by the projector 210 is also referred to as a projected image.
  • the projected image is an example of a display image.
  • the projection area 211 is on the floor surface in the real space.
  • the present embodiment is not limited to this, and the projection area 211 may be provided on the wall or ceiling of the real space. Further, the projection area 211 may be provided in a plurality of places such as a floor surface and a wall (the range of the projection area 211 may be a width including a plurality of places such as a floor surface and a wall).
  • the projector 210 may have a mechanism for driving in any direction (for example, a pan / tilt mechanism). Further, a plurality of projectors 210 may be provided in the real space. With the plurality of projectors 210, it is possible to set a wider range as the projection area 211.
  • the camera 310 is an imaging device that captures images in real space.
  • the camera 310 is, for example, an RGB (red, green, blue) camera or an IR (infrared) camera; it has a lens system, a drive system, and an image pickup element, and captures an image (still image or moving image).
  • the camera 310 is an example of the sensor 300 (FIG. 3).
  • the camera 310 captures the real space included in the imaging angle of view 311.
  • the image pickup angle of view 311 means an image pickup range, and is defined by the installation position of the camera 310, the image pickup direction, and the angle of the image pickup range centered on the image pickup direction.
  • the image captured by the camera 310 is also referred to as a captured image.
  • the image pickup angle of view 311 of the camera 310 according to the present embodiment may be a range including at least the projection area 211. Further, the imaged angle of view 311 may be a range including the entire real space.
  • the camera 310 may have a mechanism for driving in any direction (for example, a pan / tilt mechanism).
  • the camera 310 may be fixed to the projector 210 so that the imaging direction of the camera 310 is aligned with the projection direction of the projector 210. Further, the camera 310 may be provided at a position different from that of the projector 210. Further, a plurality of cameras 310 may be provided in the real space. With the plurality of cameras 310, it is possible to set the imaging angle of view 311 in a wider range. The captured image captured by the camera 310 is output to the information processing apparatus 100.
  • the information processing apparatus 100 communicates with the projector 210 and the camera 310 arranged in the real space, controls the projection of an image into the real space by the projector 210, and acquires the image captured by the camera 310.
  • the information processing apparatus 100 can perform real space recognition (spatial recognition) based on the captured image acquired from the camera 310.
  • the information processing apparatus 100 communicates with the speaker and controls the audio output from the speaker.
  • the speaker is an example of the output device 200 (FIG. 3).
  • the speaker may be a directional speaker.
  • the speaker may be a unit integrated with the projector 210, may be arranged in a place different from the projector 210 in the real space, or may be provided in a personal terminal such as a smartphone or a mobile phone.
  • FIG. 2 is a diagram illustrating a place suitable for viewing a projected image.
  • the display is controlled so that the orientation of the projected image faces the user's body.
  • the shadow of the user appears on the extension line of the projector 210 (light source) and the user. Therefore, for example, when the user is located between the projector 210 and the projection area 211 (area E1) as shown in the upper left of FIG. 2, a shadow is generated in the same direction as the user's line of sight, which hinders the viewing of the projected image 500a.
  • On the other hand, when the projection area 211 is located between the projector 210 and the user, the shadow of the user is generated in a direction different from the user's line-of-sight direction (outside the visual field area) and does not interfere with the viewing of the projected image 500b. However, since the user's position is the area E2 on the lateral side of the projection area 211, the projected image 500b may be reduced in size when displayed, and the visibility may decrease.
  • Further, even at a position where no shadow is cast in the line-of-sight direction or the body orientation, the wide projection area 211 may not be fully utilized and the projected image 500d may be reduced in size when displayed, which may reduce visibility. Especially when projecting an image in a living space, it is assumed that the user walks to various positions.
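  • As an illustrative sketch only (not part of the patent text, names hypothetical), the distinction between the areas in FIG. 2 comes down to whether the shadow cast by the user, which extends away from the projector, points the same way the user is looking. A minimal 2D check:

```python
def shadow_in_view(projector_xy, user_xy, gaze_xy):
    """Return True if the user's shadow falls roughly in the viewing direction.

    The shadow extends from the user away from the light source, i.e. along
    (user - projector). If that direction and the gaze direction point the
    same way, the shadow overlaps what the user is looking at (area E1 in
    FIG. 2); if they point the opposite way, the shadow falls behind the user.
    """
    sx, sy = user_xy[0] - projector_xy[0], user_xy[1] - projector_xy[1]
    gx, gy = gaze_xy
    return sx * gx + sy * gy > 0

# User standing between the projector and the projection area, looking at it:
print(shadow_in_view(projector_xy=(0, 0), user_xy=(1, 0), gaze_xy=(1, 0)))   # True
# User on the far side of the projection area, looking back toward it:
print(shadow_in_view(projector_xy=(0, 0), user_xy=(4, 0), gaze_xy=(-1, 0)))  # False
```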
  • FIG. 3 is a diagram showing an example of the configuration of the information processing system according to the present embodiment.
  • the information processing system according to the present embodiment includes an information processing device 100, an output device 200, and a sensor 300.
  • the output device 200 is a device that presents the information received from the information processing device 100 to the user in the real space.
  • the output device 200 is realized by, for example, a projector 210.
  • the projector 210 is an example of a display device.
  • Although not shown in FIG. 3, the output device 200 may further include devices that can present some information to the user, such as an audio output device (speaker), a lighting device, a vibration device, a wind output device, an air conditioning device, and various actuators. Further, a plurality of output devices 200 may exist in the space.
  • the projector 210 may be, for example, a device having a drive mechanism and capable of projecting in any direction. By having such a mechanism, it is possible to project an image not only at one place but also at various places.
  • the projector 210 may include a component capable of output other than the display.
  • the projector 210 may be combined with a sound output device such as a speaker.
  • the speaker may be a unidirectional speaker capable of forming directivity in a single direction. The unidirectional speaker outputs sound in the direction in which the user is, for example.
  • the projector 210 is an example of a display device
  • a display device other than the projector 210 may be used as the display device included in the output device 200.
  • the display device may be a display provided on a floor, a wall, a table, or the like in a real space.
  • the display device may be a touch panel display capable of detecting a user's touch operation.
  • the display device may be a TV device arranged in a real space.
  • the display device may be a device worn by the user.
  • the device worn by the user may be, for example, a glasses-type device provided with a transmissive display or a wearable device such as an HMD (Head Mounted Display) worn on the user's head.
  • the display device may be a personal terminal such as a smartphone, a tablet terminal, a mobile phone, or a PC (personal computer).
  • the sensor 300 is provided in the real space, detects information about the environment and the user from the real space, and outputs the detected information (sensing data) to the information processing apparatus 100.
  • the sensor 300 senses environmental information such as three-dimensional information of the real space (the shape of the real space, the arrangement and shape of real objects such as furniture, etc.), information on the projection area (size and location), the state of the projection surface (roughness of the material, color, etc.), the illuminance environment, and the volume.
  • the sensor 300 senses information about the user such as the presence / absence of the user, the number of people, the position, the posture, the line-of-sight direction, the direction of the face, and the gesture of the fingers.
  • the sensor 300 may be single or plural. Further, the sensor 300 may be provided in the output device 200.
  • the sensor 300 is realized by, for example, a camera 310 and a distance measuring sensor 320.
  • the camera 310 and the distance measuring sensor 320 may be installed on a ceiling, a wall, a table, or the like in a real space, or may be worn by a user. Further, a plurality of cameras 310 and a plurality of distance measuring sensors 320 may be provided.
  • the camera 310 captures one or more users in the space and the projection area 211, and outputs the captured image to the information processing apparatus 100.
  • the camera 310 may be single or plural.
  • a camera for environment recognition, a camera for user recognition, and a camera for shooting a projection area may be arranged in a real space.
  • the imaging wavelength is not limited to the visible light region, and may include ultraviolet rays and infrared rays, or may be limited to a specific wavelength region.
  • the camera 310 may be an RGB-IR camera in which an RGB camera and an IR camera are combined.
  • the information processing apparatus 100 can simultaneously acquire a visible light image (also referred to as an RGB image or a color image) and an IR image.
  • the distance measuring sensor 320 detects the distance information (depth data) in the space and outputs it to the information processing apparatus 100.
  • the distance measuring sensor 320 may be realized by a depth sensor that can acquire a three-dimensional image that can comprehensively recognize three-dimensional information in space and can be driven by a mechanical mechanism. Further, the distance measuring sensor 320 may be realized by a method using infrared light as a light source, a method using ultrasonic waves, a method using a plurality of cameras, a method using image processing, or the like.
  • the range-finding sensor 320 may be a device that acquires depth information, such as an infrared range-finding device, an ultrasonic range-finding device, LiDAR (Laser Imaging Detection and Ranging), or a stereo camera.
  • the distance measuring sensor 320 may be a ToF (Time Of Flight) camera capable of acquiring a highly accurate distance image.
  • the distance measuring sensor 320 may be a single sensor or a plurality of sensors, and the distance information in the space may be acquired collectively.
  • the camera 310 and the distance measuring sensor 320 that realize the sensor 300 may be provided at different places or may be provided at the same place. Further, the sensor 300 is not limited to the camera 310 and the distance measuring sensor 320, and may be further realized by an illuminance sensor or a microphone. Further, the sensor 300 may be further realized by a touch sensor provided in the projection area 211 and detecting a user operation on the projection area 211.
  • the information processing apparatus 100 includes an I / F (Interface) unit 110, a control unit 120, an input unit 130, and a storage unit 140.
  • the I / F unit 110 is a connection device for connecting the information processing device 100 and other devices.
  • the I / F unit 110 is realized by, for example, a USB (Universal Serial Bus) connector, a wired / wireless LAN (Local Area Network), Wi-Fi (registered trademark), Bluetooth (registered trademark), ZigBee (registered trademark), a mobile communication network (LTE (Long Term Evolution), 3G (3rd generation mobile communication method), 4G (4th generation mobile communication method), 5G (5th generation mobile communication method)), and the like.
  • the I / F unit 110 inputs / outputs information to / from the projector 210, the camera 310, and the distance measuring sensor, respectively.
  • The control unit 120 functions as an arithmetic processing unit and a control device, and controls the overall operation in the information processing device 100 according to various programs.
  • the control unit 120 is realized by an electronic circuit such as a CPU (Central Processing Unit) or a microprocessor. Further, the control unit 120 may include a ROM (Read Only Memory) for storing programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) for temporarily storing parameters and the like that change as appropriate.
  • the control unit 120 functions as a recognition unit 121, a viewing location determination unit 122, and a display control unit 123.
  • the recognition unit 121 recognizes the real space environment and the user based on various sensing data (captured image, depth data, etc.) detected by the sensor 300.
  • As the environment recognition processing, the recognition unit 121 recognizes the shape of the real space, the existence of real objects (the arrangement and shape of home appliances, furniture, etc.), the state of the projection surface (roughness, color, reflectance, etc.), the position and size of the projection area, obstacles placed in the projection area, obstacles that block the projection light toward the projection area (home appliances, furniture, users, etc.), the illuminance environment, the sound environment, and the like.
  • the recognition unit 121 detects the presence / absence of a user, the position, the number of people, the posture, the line-of-sight direction, the face orientation, the gesture of the fingers, the user operation, etc. as the user recognition process.
  • Examples of the user operation include a touch operation on the projected image (projection surface) and an operation on the projected image (projection surface) by a digital pen provided with an IR light emitting unit or the like at the pen tip.
  • Another example of a user operation is an operation using a controller or a laser beam.
  • the recognition unit 121 may acquire the user's spoken voice with a microphone and recognize the voice input by the user.
  • the viewing place determination unit 122 determines an appropriate viewing place for the user. For example, the viewing location determination unit 122 determines an appropriate viewing location for the user based on the position of the projector 210 and the position of the projection area 211. Specifically, the appropriate viewing place is assumed to be a place where there is less risk of deterioration in visibility when the user visually recognizes the image displayed in the projection area 211. A place where there is less risk of deterioration of visibility is, for example, a place where a shadow does not occur in the line-of-sight direction of the user or a place where a projected image (content) can be displayed larger (a place where the projection area 211 can be effectively used). And so on.
  • At this time, since the display orientation of the projected image is uniquely determined with respect to the user's line-of-sight direction (or face orientation or body orientation), specifically so as to face the user, a place from which the projection area 211 can be effectively used may be determined as the appropriate viewing place.
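  • A minimal sketch of how such a viewing-place decision could be scored is given below. It is illustrative only and assumes that candidate positions on the periphery of the projection area are already known; the name choose_viewing_place and its scoring rule are assumptions, not taken from the patent.

```python
def choose_viewing_place(projector_xy, area_center, candidates):
    """Pick the candidate viewing place whose shadow interferes least.

    The shadow cast by a user extends away from the projector, so a candidate
    is better the more that shadow direction points *away* from the projection
    area the user would be looking at. Candidates are assumed to be points on
    the periphery of the projection area, so the area itself stays free for
    the content.
    """
    def score(cand):
        shadow = (cand[0] - projector_xy[0], cand[1] - projector_xy[1])
        gaze = (area_center[0] - cand[0], area_center[1] - cand[1])
        # Negative dot product: large when shadow and gaze point opposite ways.
        return -(shadow[0] * gaze[0] + shadow[1] * gaze[1])
    return max(candidates, key=score)

# Projector at the origin, projection area centred at (3, 0); candidates are
# spots around the area edge. The far side (4.5, 0) wins: the shadow cast
# there falls behind the user instead of onto the content.
print(choose_viewing_place((0, 0), (3, 0), [(1.5, 0), (3, 1.5), (4.5, 0)]))
```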
  • The output control unit 123 controls the output of information from the output device 200.
  • the output control unit 123 functions as a display control unit that controls image projection (display) from the projector 210.
  • the display control unit can control the projection location and orientation of the image.
  • the output control unit 123 may further function as an audio output control unit that controls audio output from a speaker (not shown). Further, the output control unit 123 can function as a control unit of various other output devices.
  • the output control unit 123 can perform display control that implicitly guides the user to an appropriate viewing place as a function as a display control unit.
  • For example, the output control unit 123 projects a human figure image in the line-of-sight direction of the user as if it were the user's shadow, and controls the display so that the human figure image becomes smaller as the user approaches the place determined by the viewing place determination unit 122.
  • the output control unit 123 can implicitly guide the user by controlling the display state of the content (moving image, still image, website, text, GUI, etc.). Details of the implicit induction by this embodiment will be described later.
  • the input unit 130 receives input information to the information processing device 100.
  • the input unit 130 may be an operation input unit that receives an operation instruction by the user.
  • the operation input unit may be a touch sensor, a pressure sensor, or a proximity sensor.
  • the operation input unit may have a physical configuration such as a button, a switch, and a lever.
  • the input unit 130 may be a voice input unit (microphone).
  • the storage unit 140 is realized by a ROM (Read Only Memory) that stores programs and arithmetic parameters used for processing of the control unit 120, and a RAM (Random Access Memory) that temporarily stores parameters and the like that change as appropriate.
  • the storage unit 140 stores various information input from the external device by the I / F unit 110 and various information calculated and generated by the control unit 120.
  • the information processing apparatus 100 may further have, for example, an output unit.
  • the output unit may be realized by, for example, a display unit or an audio output unit (speaker).
  • the display unit outputs an operation screen, a menu screen, or the like, and may be a display device such as a liquid crystal display (LCD: Liquid Crystal Display) or an organic EL (Electro Luminescence) display.
  • the information processing device 100 may be realized by, for example, a smartphone, a tablet terminal, a PC (personal computer), an HMD, or the like. Further, the information processing device 100 may be a dedicated device arranged in the same space as the output device 200 and the sensor 300. Further, the information processing apparatus 100 may be a server (cloud server) on the Internet, or may be realized by an intermediate server, an edge server, or the like.
  • the information processing device 100 may be configured by a plurality of devices, or at least a part of the configuration may be provided in the output device 200 or the sensor 300. Further, at least a part of the configuration of the information processing device 100 is a server (cloud server) on the Internet, an intermediate server, an edge server, a dedicated device arranged in the same space as the output device 200 and the sensor 300, and a user's personal terminal. It may be provided in (smartphone, tablet terminal, PC, HMD, etc.).
  • FIG. 4 is a flowchart showing an example of the flow of the first guidance processing example according to the present embodiment.
  • The first guidance processing example will be explained assuming, for example, a case where an image is projected in a living space and a user enters the living space, that is, before image projection is started (before the user starts viewing the content).
  • the recognition unit 121 of the information processing apparatus 100 recognizes the environment of the living space based on various sensing data (captured images and depth data) detected by the camera 310 and the distance measuring sensor 320. (Step S103). Specifically, the recognition unit 121 recognizes the shape of the living space, the arrangement and shape of a real object such as furniture, and the like. The recognition unit 121 also recognizes the position of the projector 210 arranged in the living space, the location of the projection area 211, and the like. In this embodiment, it is assumed that the projection area 211 is provided on the floor surface of the living space as an example.
  • the recognition unit 121 may recognize the location and size of the projection area 211 from the information on the position and projection direction of the projector 210 and the projectable angle. Further, the projection direction of the projector 210 may be manually set and drive-controlled by the user in advance, or may be automatically set and drive-controlled by the information processing apparatus 100. Further, the environment recognition process shown in step S103 may be continuously performed or may be performed at regular time intervals. Further, the environment recognition process may be performed at least before the start of image projection (viewing of content), or may be continuously performed during image projection. When the process of environment recognition is continuous, the recognition unit 121 may mainly recognize changes (differences) in the environment.
  • the viewing location determination unit 122 determines an appropriate viewing location (step S106). Specifically, the viewing location determination unit 122 determines a viewing location that can avoid deterioration of visibility due to the appearance of the user's shadow or the like, based on the position of the projector 210 as a light source and the position of the projection area 211. For example, the viewing place determination unit 122 determines, as an appropriate viewing place, a place where the direction in which the shadow appears and the line-of-sight direction differ, taking into account that the shadow of the user appears on the extension line of the positions of the projector 210 and the user.
  • the viewing location determination unit 122 may also determine the outside (periphery) of the projection area 211 as an appropriate viewing location for the user in order to display the content as large as possible and effectively use the projection area 211. For example, in the case of the positional relationship shown in FIG. 2, the viewing place determination unit 122 determines the area E3 shown in the lower left of FIG. 2 as an appropriate viewing place.
  • the recognition unit 121 recognizes the position of the user in the living space based on various sensing data (captured images and depth data) detected by the camera 310 and the distance measuring sensor 320 (step S109).
  • the output control unit 123 determines whether or not it is necessary to guide the user to the appropriate viewing place determined above (step S112). Specifically, the output control unit 123 determines that guidance is not necessary when the recognized user's position is the appropriate viewing location determined above. On the other hand, if the recognized user's position is not the appropriate viewing place determined above, the output control unit 123 determines that guidance is necessary. For example, in the positional relationship shown in FIG. 2, when the user enters the room from the vicinity where the projector 210 is installed (area E1), stays in the area E1, and starts viewing the content, a shadow appears in the line-of-sight direction of the user and visibility is reduced.
  • In such a case, the output control unit 123 determines that it is necessary to guide the user to the area E3 determined as an appropriate viewing place. On the other hand, for example, when the user enters the room from the vicinity of the area E3 and stays in the area E3, even if the image is projected on the projection area 211, the shadow of the user is generated on the side opposite to the line-of-sight direction and the visibility does not deteriorate, so the output control unit 123 determines that guidance is not necessary.
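  • The decision in step S112 reduces to a proximity test between the recognised user position and the determined viewing place. The following sketch is illustrative only; the tolerance value and the helper name guidance_needed are assumptions, not from the patent.

```python
import math

def guidance_needed(user_xy, viewing_place_xy, tolerance_m=0.5):
    """Step S112 style check: guidance is unnecessary when the recognised user
    position already lies within a tolerance of the appropriate viewing place."""
    return math.dist(user_xy, viewing_place_xy) > tolerance_m

print(guidance_needed((0.5, 0.2), (3.5, 2.0)))   # True  -> guide the user
print(guidance_needed((3.4, 1.9), (3.5, 2.0)))   # False -> start viewing as-is
```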
  • the output control unit 123 performs guidance by controlling the display of the shadow image (step S115) or guidance by controlling the content (step S118).
  • the guidance by the display control of the shadow image is mainly used when it is desired to change the physical position in the living space of the user (for example, when it is desired to move from the area E1 to the area E3).
  • the guidance by controlling the content is mainly used when it is desired to change the user's face direction, line-of-sight direction, body direction, posture, etc. without changing the physical position in the user's living space. Which method may be used for guidance may be preset, or both may be performed in parallel or sequentially depending on the situation.
  • For example, the output control unit 123 may first move the user to an appropriate viewing place by controlling the display of the shadow image, and then change the user's face orientation and posture by controlling the content.
  • Specifically, the output control unit 123 displays a shadow image (an example of a virtual object) in the direction of the user's line of sight (preferably a place in front of the user and within the user's field of view, such as the direction in which the body is facing) as if it were the user's shadow (for example, a human figure image displayed as if a shadow extends from the user's feet). As a result, the user thinks that his or her own shadow will be an obstacle when viewing the content displayed on the floor, and is expected to move naturally to another place.
  • the output control unit 123 may also follow the human figure image when the user moves, and may end the display of the human figure image when the user moves to an appropriate viewing place.
  • the human figure image may be a realistic, finely shaped human figure, or may be a deformed human figure (with fine shapes omitted). Since a human shadow is a familiar physical phenomenon that a user experiences on a daily basis, using a human figure image makes it possible to guide the user implicitly without making the user feel that anything is unnatural. Further, since a human shadow is generally recognized as black or gray, the color of the human figure image is preferably a color close to black or gray; however, the present embodiment is not limited to this, and it may be another color such as blue, green, or red.
  • More specifically, the output control unit 123 controls the projector 210 to project a human figure image onto the projection area 211 in the user's line-of-sight direction (which may be the general orientation of the face, the head, or the body).
  • Then, when the user moves in the direction of the appropriate viewing place (the correct direction), the output control unit 123 controls the display so that the human figure image gradually becomes smaller and/or gradually lighter (the transmittance increases); when the user moves in the wrong direction, the display is controlled so that the human figure image gradually becomes larger and/or gradually darker (the transmittance decreases). That is, the output control unit 123 performs display control that changes the display state of the shadow image according to the change in the positional relationship (distance, etc.) between the appropriate viewing place and the user. This makes it possible to more reliably and implicitly guide the user to the appropriate viewing place.
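  • One way to realise this display control is to map the user's remaining distance to the target viewing place onto the size and transparency of the shadow image. The sketch below is illustrative only; the linear mapping and the name shadow_image_state are assumptions.

```python
import math

def shadow_image_state(user_xy, target_xy, start_xy, base_size=1.0):
    """Map the user's progress toward the target viewing place to the size and
    transparency of the displayed human figure (shadow) image: full size and
    opaque at the starting position, smaller and more transparent as the user
    approaches the target, and invisible on arrival."""
    total = max(math.dist(start_xy, target_xy), 1e-6)
    remaining = math.dist(user_xy, target_xy)
    progress = max(0.0, min(1.0, 1.0 - remaining / total))   # 0 at start, 1 at target
    size = base_size * (1.0 - progress)
    transparency = progress                                   # 0 = opaque, 1 = invisible
    return size, transparency

print(shadow_image_state(user_xy=(2.0, 0.0), target_xy=(4.0, 0.0), start_xy=(0.0, 0.0)))
# (0.5, 0.5): halfway there, the shadow image is half size and half transparent
```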
  • FIG. 5 shows a diagram illustrating an example of guidance using a human figure image according to the present embodiment.
  • The output control unit 123 displays the human figure image 530 in the user's line-of-sight direction. Note that the actual shadow is generated on the extension line of the projector 210 and the user, in a direction different from the line-of-sight direction.
  • the output control unit 123 causes the display position of the human figure image 530 to follow the movement of the user and controls the display so that the human figure image 530 becomes smaller.
  • the user naturally moves in the direction in which the shadow becomes smaller so that the shadow does not interfere with the viewing of the content, so that the user can be implicitly guided to the appropriate viewing place T.
  • Conversely, when the user moves away from the appropriate viewing place T, the output control unit 123 also causes the display position of the human figure image 530 to follow the user's movement, and controls the display so that the human figure image 530 becomes larger.
  • The guidance using the human figure image is not limited to guidance in the left-right direction (the right or left direction for the user facing the projection area 211) with respect to the projection area 211 as described above. For example, when the output control unit 123 wants to move the user backward (backward for the user facing the projection area 211), it may perform display control in which a large shadow is displayed in the line-of-sight direction of the user and the shadow becomes smaller as the user walks backward and moves away.
  • Similarly, when the output control unit 123 wants to move the user forward (forward for the user facing the projection area 211), it may perform display control in which a large shadow is displayed in the line-of-sight direction of the user and the shadow becomes smaller as the user moves forward (the closer the user gets to the projection area 211).
  • the output control unit 123 may use silhouette images of various figures such as circles, squares, triangles, ellipses, polygons, trapezoids, rhombuses, and cloud shapes as other examples of virtual objects. Silhouette images with such various shapes are called graphic shadow images.
  • the color of the graphic shadow image may be a color close to black or gray, or may be another color such as blue, green, or red, as in the case of a general human figure.
  • FIG. 6 is a diagram illustrating an example of guidance using a graphic shadow image according to the present embodiment.
  • For example, the output control unit 123 displays a circular graphic shadow image 570a in the line-of-sight direction of the user within the projection area 211. When the user moves in the correct direction, the circular shadow is displayed smaller and lighter, as in the graphic shadow images 570d and 570e; when the user moves in the wrong direction, the circular shadow is displayed larger and darker, as in the graphic shadow images 570b and 570c.
  • The implicit guidance by the display control of the shadow image described above may be performed before the user starts viewing the content, that is, before the content is projected on the projection area 211. Further, even when the content is already projected, a shadow image (virtual object) may be displayed overlaid on the content to implicitly guide the user.
  • In the above description, the shadow image is displayed in the user's line-of-sight direction (the direction in which the body is facing, forward, etc.) when the user is within the projection area 211; when the user is outside the projection area 211, the shadow image may be displayed in the part of the projection area 211 closest to the user.
  • Next, implicit guidance by content control will be described. The output control unit 123 displays some content in the projection area 211 and changes the display state of the content according to the user's face orientation, head orientation, line-of-sight orientation, body orientation, posture (standing / sitting / squatting), and the like; it is thereby expected that the user will naturally change the face orientation, posture, and so on in order to see the content well.
  • For example, the output control unit 123 may control the strength of the blurring of the content according to the user's face orientation, head orientation, line-of-sight direction, and the like, so that the content looks well in focus when viewed from a certain viewpoint (a desirable place or direction). As a result, the user naturally tilts the face or moves to look from a different direction or place, so that it is possible to implicitly guide the user to a desired face direction, line-of-sight direction, or place.
  • The strength of blurring is given here as an example, but the control is not limited to this; by controlling the saturation, brightness, or transmittance of the content, it is also possible to implicitly guide the user's face orientation and the like.
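  • As an illustrative sketch of this kind of control (the names and the mapping are hypothetical, not from the patent), the blur strength could grow with the angular deviation between the user's current face orientation and the desired viewing direction:

```python
def blur_strength(face_dir_deg, desired_dir_deg, max_blur_px=12.0):
    """Return a blur radius for the content given how far the user's face
    orientation deviates from the desired viewing direction: in focus when the
    user looks from the intended direction, increasingly blurred otherwise."""
    deviation = abs((face_dir_deg - desired_dir_deg + 180) % 360 - 180)  # 0..180
    return max_blur_px * min(deviation / 90.0, 1.0)

print(blur_strength(0, 0))     # 0.0  -> sharp from the intended direction
print(blur_strength(45, 0))    # 6.0  -> partially blurred
print(blur_strength(180, 0))   # 12.0 -> fully blurred from the opposite side
```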
  • Further, when the output control unit 123 wants to make the user sit or crouch so that the actual shadow of the user becomes small, it can make the user naturally sit or crouch by reducing the content displayed (projected) on the floor. When the content is displayed small, the user is expected to naturally bring the face closer to the content in order to see it closely, and to sit, crouch, or bend down.
  • the output control unit 123 can implicitly guide the user's face direction, line-of-sight direction, posture, etc. by presenting the content using the optical illusion.
  • An example of content using an optical illusion is an image in which some characters or figures can be seen or 2D looks 3D depending on the viewing direction or angle.
  • In this case, the output control unit 123 may perform control so that some characters or figures become visible, or 2D changes to 3D, when the user's face direction, line of sight, or posture becomes the desired one (that is, display control that makes the image work as an illusion image). As a result, it is expected that the user will naturally adjust the face direction and the line-of-sight direction out of interest in the illusion image.
  • Further, the output control unit 123 can perform control to localize the sound source in a predetermined direction and output the sound, thereby implicitly guiding the user to turn in an arbitrary direction (the direction of the sound source) or to direct the line of sight there.
  • the implicit guidance by controlling the content described above may be performed after the guidance of the physical position movement of the user is performed. Further, the content used for guidance may be content to be viewed by the user (video, still image, website, text, GUI, etc.) or content prepared for guidance.
  • After performing the guidance control, the output control unit 123 may start viewing of the content. Note that it is also assumed that the user does not move to an appropriate viewing place even if the guidance control is performed. The output control unit 123 may start viewing of the content even if the user does not move to an appropriate viewing place (or moves to a place other than the appropriate viewing place) after some implicit guidance has been performed.
  • When guidance is not required (step S112 / No), that is, when the user's position is already an appropriate viewing place, the output control unit 123 may start viewing of the content as it is without performing guidance.
  • The start of viewing the content may be, for example, displaying a menu screen or playing a start video. Further, the output control unit 123 may start viewing of predetermined content according to a user's operation (a gesture, a touch operation on the projection area 211, a predetermined switch / button operation, a voice input, an operation input using a personal terminal such as a smartphone, etc.), or may start viewing of the content automatically.
  • <Second guidance processing example> Next, an example of the second guidance process will be described.
  • Since the projection area 211 exists on the floor surface of the real space, it is assumed that the user walks around in the real space and may be located within the projection area 211.
  • When the content is displayed and controlled so as to face the user's face orientation and body orientation, a part of the content may be blocked by the user himself or herself, or the content may not be displayed in a sufficient size, and the visibility and comfort of the user may be reduced.
  • In this case, guidance that imposes only a small physical load, such as changing the user's face or body orientation, rather than a large physical load such as moving the user's physical position, and that improves visibility and comfort in a short time, is preferable.
  • FIG. 7 is a flowchart showing an example of the flow of the second guidance processing example according to the present embodiment.
  • the output control unit 123 controls the projection of the content (step S203).
  • the content projection control may start the reproduction of the predetermined content according to the user operation, or may automatically start the reproduction of the predetermined content.
  • the content is projected on the floor of the living space.
  • the above-mentioned first guidance process may be performed before the projection control of the content.
  • the recognition unit 121 recognizes the environment of the living space (step S206).
  • the environment recognition may be performed before the projection control of the content or after the projection control of the content is started.
  • Examples of the environment recognition of the living space include space recognition, recognition of the projection area 211, recognition of the projection surface, recognition of a shield, and the like.
  • the recognition unit 121 recognizes the user's position (step S209). Further, the recognition unit 121 also recognizes the face orientation, head orientation, body orientation, or line-of-sight direction of the user, and the output control unit 123 may control the orientation of the content so that it faces the user. Further, the position and orientation of the user may be continuously recognized, and when the user moves while viewing the content, the output control unit 123 may make the display position and orientation of the content follow the position and orientation of the user. This allows the user to view the content anywhere in the living space.
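  • A minimal sketch of display control that places the content in front of the user and orients it to face them is shown below; the offset distance, the rotation convention, and the name content_pose_for_user are assumptions for illustration only.

```python
import math

def content_pose_for_user(user_xy, user_facing_deg, offset_m=1.0):
    """Place the content at offset_m in front of the user and rotate it by the
    user's facing angle so it can be rendered upright from the user's viewpoint
    (the exact rotation convention depends on the renderer)."""
    rad = math.radians(user_facing_deg)
    x = user_xy[0] + offset_m * math.cos(rad)
    y = user_xy[1] + offset_m * math.sin(rad)
    return x, y, user_facing_deg

print(content_pose_for_user((2.0, 1.0), 90.0))   # (2.0, 2.0, 90.0)
```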
  • the output control unit 123 acquires the positional relationship between the projector 210, the projection area 211, and the user (step S212).
  • the output control unit 123 determines whether or not there is a high possibility of a decrease in visibility based on the above positional relationship (step S215). For example, when the user is located in the projection area 211, the visibility may be deteriorated depending on the orientation of the user and the position of the projector 210.
  • FIG. 8 is a diagram illustrating a decrease in visibility when the user is located within the projection area 211. As shown on the left side of FIG. 8, when the light source direction of the projector 210 and the line-of-sight direction of the user are substantially the same, the light hits the back of the user, so the user's visual field range S is shaded by the user and the visibility is likely to decrease.
  • When the positional relationship is reversed, the shadow occurs on the back side of the user (on the extension line of the projector 210 and the user's position), and the possibility of deterioration in visibility is low.
  • In that case, however, the display area of the content may be narrow and the content may have to be reduced and displayed.
  • Whether or not there is a high possibility of deterioration in visibility may be determined based on whether or not the positional relationship between the projector 210, the projection area 211, and the user satisfies a predetermined condition. For example, based on the positions of the projector 210 and the projection area 211, positions in the projection area 211 at which visibility is likely to deteriorate when the user stands there may be set in advance. Alternatively, the control unit 120 may acquire positional relationships with a high possibility of deterioration in visibility by machine learning.
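  • As an illustrative heuristic for the determination in step S215 (the names below are hypothetical, and an actual system might instead use preset positions or machine learning as described above), visibility can be flagged as at risk when the user stands inside the projection area and faces the same way the projection light travels:

```python
def visibility_at_risk(projector_xy, user_xy, gaze_xy, area_contains):
    """Step S215 style heuristic: visibility is likely to deteriorate when the
    user stands inside the projection area and is facing the same way the
    projector light travels, so their own shadow covers their visual field.

    area_contains : callable that reports whether a point lies in the
                    projection area (hypothetical helper).
    """
    if not area_contains(user_xy):
        return False
    light = (user_xy[0] - projector_xy[0], user_xy[1] - projector_xy[1])
    return light[0] * gaze_xy[0] + light[1] * gaze_xy[1] > 0

# Square projection area from (2, -2) to (6, 2); projector at the origin.
inside = lambda p: 2 <= p[0] <= 6 and -2 <= p[1] <= 2
print(visibility_at_risk((0, 0), (3, 0), (1, 0), inside))    # True: shadow in view
print(visibility_at_risk((0, 0), (3, 0), (-1, 0), inside))   # False: shadow behind
```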
  • When it is determined that there is a high possibility that the visibility will deteriorate (step S215 / Yes), the output control unit 123 performs guidance that implicitly changes the face orientation and body orientation of the user by controlling the content being viewed by the user, thereby improving visibility (step S218). A specific example of implicit guidance by controlling the content will be described below with reference to FIG. 9.
  • FIG. 9 is a diagram illustrating implicit guidance by controlling the content being viewed in the second guidance processing example.
  • It is assumed that the user is located in the projection area 211 as shown on the left side of FIG. 9, or that the user walks around the room and is likely to reach the position shown on the left side of FIG. 9.
  • In this case, a shadow appears in the line-of-sight direction of the user and interferes with the viewing of the content 500 (the shadow overlaps the content 500), so the visibility is lowered. Therefore, the output control unit 123 controls the content 500 so as to implicitly guide the user's face orientation, posture, and the like to an arbitrary direction and posture, thereby improving visibility.
  • For example, the output control unit 123 performs control to move the content 500 to a place different from the place where the user's shadow appears (that is, a place where the user's shadow does not overlap the content).
  • As a result, the user can continue viewing in a direction in which his or her own shadow does not get in the way simply by changing the direction of the body, without a physical load such as physical movement, and visibility is improved.
  • The movement of the content 500 is not limited to rotation; control such as shifting the content 500 in the lateral direction (an example of planar movement control) may also be performed.
  • Further, the output control unit 123 may move the content 500 to a wall instead of the floor for display (an example of three-dimensional movement control). For example, by moving the content 500 to the wall on the lateral side of the user, the user can continue viewing in a direction in which his or her own shadow does not get in the way simply by changing the direction of the body, without a physical load such as physical movement, and visibility is improved.
  • In this way, the output control unit 123 can move the content 500 to a place that is different from the place where the shadow of the user appears and from which the user can continue viewing in a direction in which the shadow does not get in the way simply by changing the direction of the body or the line of sight, without a physical load such as physical movement, thereby improving visibility.
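  • The relocation of the content 500 could be sketched as choosing, among candidate positions around the user, the one whose direction differs most from the direction of the user's shadow. This is illustrative only; the candidate circle, the distance, and the name relocate_content are assumptions, not from the patent.

```python
import math

def relocate_content(projector_xy, user_xy, distance=1.2, n_candidates=8):
    """Choose a new content position around the user that avoids the shadow.

    Candidate positions are placed on a circle around the user; the one whose
    direction differs most from the shadow direction (user minus projector) is
    selected, so the user only has to turn rather than walk.
    """
    shadow = math.atan2(user_xy[1] - projector_xy[1], user_xy[0] - projector_xy[0])
    best, best_gap = None, -1.0
    for i in range(n_candidates):
        ang = 2 * math.pi * i / n_candidates
        gap = abs((ang - shadow + math.pi) % (2 * math.pi) - math.pi)
        if gap > best_gap:
            best_gap = gap
            best = (user_xy[0] + distance * math.cos(ang),
                    user_xy[1] + distance * math.sin(ang))
    return best

# Projector at the origin, user at (3, 0): the shadow points along +x, so the
# content is moved to the opposite side of the user, toward the projector.
print(relocate_content((0, 0), (3, 0)))   # approximately (1.8, 0.0)
```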
  • Further, the output control unit 123 can also make the user naturally squat by performing display control that reduces the content 500, thereby reducing the shadow area and improving visibility.
  • The display control of the content 500 for making the user squat is an example, and the present embodiment is not limited to this.
  • the content control and user-induced actions are not limited to the above examples.
  • For example, the user can be made to stand up by controlling the enlargement of the content, and the user's face, head, and line-of-sight direction can be changed by controlling the content to move from the floor to the wall or ceiling (an example of three-dimensional movement control).
  • The present embodiment is not limited to controlling the content being viewed; for example, while the user is located in the projection area 211, the user's line-of-sight direction, face orientation, body orientation, posture, and the like may also be implicitly guided by controlling the projected content.
  • FIG. 10 is a diagram illustrating a first control according to a modified example. As shown on the left side of FIG. 10, first, it is assumed that the user B comes in later while the user A is viewing or operating the content 502 in the living space. At this time, if there is an empty space in the projection area 211, the viewing place determination unit 122 determines an appropriate viewing place for the user B on the premise that new content is arranged in the empty space.
  • The location and orientation of the new content and the appropriate viewing place for user B can be determined in consideration of the position of the projector 210 and the display position of the content 502 that is already displayed in the projection area 211 and being viewed by user A. Specifically, a place is desirable where the shadow of user B interferes with neither user B's nor user A's viewing, and where the new content viewed from that place can be efficiently displayed in a large size. For example, in the example shown on the left side of FIG. 10, the viewing place T located on the right side of the projection area 211 with respect to the projector 210 is determined as an appropriate viewing place.
  • the output control unit 123 implicitly guides the user B to an appropriate viewing place T.
  • Examples of the guidance method include the method using a shadow image described with reference to FIGS. 5 and 6. Then, as shown on the right side of FIG. 10, user B can start viewing and operating the new content 504 without the visibility deteriorating due to his or her own shadow and without disturbing the viewing of user A.
  • FIG. 11 is a diagram illustrating a second control according to a modified example of the present embodiment.
  • In this case, the viewing place determination unit 122 determines an appropriate viewing place for user B on the premise that the content 502 is reduced to secure an empty space for user B.
  • The location and orientation of the new content and the appropriate viewing place for user B are determined in consideration of the position of the projector 210 and the display position of the content 502 that is already displayed in the projection area 211 and being viewed by user A (while securing the empty space).
  • Specifically, a place is desirable where the shadow of user B interferes with neither user B's nor user A's viewing, and where the new content viewed from that place can be efficiently displayed in a large size.
  • For example, the viewing place T located on the right side of the projection area 211 with respect to the projector 210 is determined as an appropriate viewing place.
  • When the content 502 is reduced, it is desirable that user A, who is viewing it, not be subjected to a physical load such as a large movement of the physical position. Therefore, examples of the display control include reducing the content 502 and displaying it closer to user A, rotating the content 502 around user A, and moving the content 502 laterally with respect to user A. Display control that moves the content 502 to a wall visible from the position of user A is another example.
  • the output control unit 123 implicitly guides the user B to an appropriate viewing place T.
  • Examples of the guidance method include the method using a shadow image described with reference to FIGS. 5 and 6.
  • the output control unit 123 may reduce the content 502 and secure an empty space before performing the guidance control.
  • the output control unit 123 displays the new content 504 in the empty space secured by reducing the content 502.
  • As a result, user B guided to the appropriate viewing place T can start viewing or operating the new content 504 without the visibility deteriorating due to his or her own shadow and without disturbing the viewing of user A.
  • the content display control when there are multiple users is not limited to the example described above.
  • the output control unit 123 reduces and displays the content 502 and arranges the new content 504 in the reserved empty space.
  • The recognition unit 121 recognizes the position of user B, who has moved in order to view the content 504, and the output control unit 123 may perform implicit guidance by displaying a virtual object or controlling the content as described above so that, given the positional relationship between the projector 210, user B, and each content, the shadow of user B does not overlap the content 502 or the content 504.
  • In this way, the position, face orientation, line-of-sight direction, posture, and the like of user B may be changed to optimize the position of user B and the display of the content.
  • The position of each user and the display position of the content can thus be controlled as appropriate based on the relationship between the position of each user, the position of the projector 210 (light source), and the position of each content, and it is possible to prevent the visibility from being lowered by the shadow of each user.
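  • A small, purely illustrative helper (the names and the segment approximation are assumptions, not from the patent) for the kind of check described above: whether the shadow cast by a user falls on a content region on the floor.

```python
import math

def shadow_overlaps_content(projector_xy, user_xy, content_rect,
                            shadow_len=1.5, samples=20):
    """Approximate the user's shadow as a line segment starting at the user's
    feet and extending away from the projector, and test whether it crosses an
    axis-aligned content rectangle (xmin, ymin, xmax, ymax) on the floor."""
    dx, dy = user_xy[0] - projector_xy[0], user_xy[1] - projector_xy[1]
    norm = math.hypot(dx, dy) or 1.0
    dx, dy = dx / norm, dy / norm
    xmin, ymin, xmax, ymax = content_rect
    for i in range(samples + 1):
        t = shadow_len * i / samples
        x, y = user_xy[0] + t * dx, user_xy[1] + t * dy
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return True
    return False

# Projector at the origin, user B at (2, 0), content occupying x in [2.5, 4.0]:
print(shadow_overlaps_content((0, 0), (2, 0), (2.5, -0.5, 4.0, 0.5)))   # True
```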
  • FIG. 12 is a diagram illustrating a third control according to a modified example of the present embodiment.
  • In this case, the viewing place determination unit 122 determines appropriate viewing places T1 to T4 from the position of the projector 210 and the position of the projection area 211, and each user is implicitly guided to one of them by using a shadow image or the like. Then, as shown on the right side of FIG. 12, all the users can view the contents 506a to 506d at positions where their own shadows do not interfere with the viewing of the other users, in sizes that make maximum use of the projection area 211, and visibility and comfort are improved.
  • the contents 506a to 506d may be different contents for each user or may be the same contents. For example, during a team discussion, it is assumed that the same content will be shared and viewed individually.
  • the positions of the appropriate viewing locations T1 to T4 are not limited to the example shown in FIG. 12, and the appropriate viewing locations T1 to T4 may be within the projection area 211.
  • In the examples above, the projection area 211 exists on the floor surface, but the present disclosure is not limited to this; the projection area 211 may exist on a wall, a ceiling, a table, or the like.
  • a plurality of contents may be projected by a plurality of projectors provided in the living space.
  • the viewing location determination unit 122 determines an appropriate viewing location in consideration of the positions of the plurality of projectors.
  • each step of the flowchart shown in FIGS. 4 and 7 may be processed in parallel as appropriate, or may be processed in the reverse order.
  • the viewing place determination unit 122 may calculate a place where a shadow is generated in consideration of a light source other than the projector 210, such as a lighting device installed in a living space, and determine an appropriate viewing place.
  • As the display device for displaying the shadow image (virtual object) used for implicit guidance, a display device different from the display device that displays the content may be used, or the same display device may be used.
  • virtual objects and contents are examples of images displayed in real space.
  • the output device 200 may be, for example, a glasses-type device provided with a transmissive display or an HMD worn on the user's head. In such a display device, the content may be superimposed and displayed in real space (AR (Augmented Reality) display). Further, the output control unit 123 can implicitly guide the user to move to an arbitrary place, or to face an arbitrary direction, by displaying a virtual object such as a human figure image in AR or by controlling the display state of the content.
  • such an arbitrary place can be determined by the viewing place determination unit 122 based on, for example, the shape of the real space, the arrangement of furniture, the position and size of flat areas, the content to be displayed, the positions of the user and other users in the real space, the positional relationship with light sources such as lighting devices arranged in the real space, the lighting environment, and the like. Further, the position and posture of the content superimposed and displayed in the real space may be associated with the floor surface and the walls of the real space.
  • the display device that AR-displays the content in the real space may be a smartphone, a tablet terminal, or the like. Further, even if it is a non-transparent display device, the captured image in real space is displayed on the display unit in real time (so-called through image display, also referred to as live view), and the content is superimposed and displayed on the captured image. Therefore, AR display can be realized.
  • it is also possible to create one or more computer programs for causing the information processing device 100, the output device 200, or the sensor 300 described above to exhibit the functions of the information processing device 100, the output device 200, or the sensor 300. Also provided is a computer-readable storage medium that stores the one or more computer programs.
  • the present technology can also have the following configurations.
  • an information processing device provided with a control unit that performs, by a display device, display control that implicitly guides the user to a specific viewing place in the real space based on the position of the display area of the image recognized from the sensing data of the real space and the position of the user.
  • the control unit determines the specific viewing place based on the position of the display area and the position of the light source in the real space, and, if the position of the user is not the specific viewing place, performs display control to implicitly guide the user to the specific viewing place; the information processing device according to (1) above.
  • the information processing device according to (2), wherein the position of the light source includes the position of the display device that projects an image into the real space.
  • the information processing device according to (3) above, wherein the display area includes a range in which an image can be projected by the display device.
  • the control unit determines, as the specific viewing place, a place outside or inside the display area where the direction of the shadow appearing on the extension line of the light source and the user differs from the line-of-sight direction of the user; the information processing apparatus according to any one of (2) to (4) above.
  • the control unit performs control to display a virtual object included in the image in front of the user as the display control for implicit guidance; the information processing device according to any one of (1) to (5) above.
  • the information processing device according to any one of (6) to (10) above, wherein the virtual object is a shadow image of a person or a figure.
  • the control unit changes the display state of the content to be viewed by the user, which is included in the image, as the display control for implicit guidance; the information processing device according to any one of (1) to (5) above. (13) The information processing device according to (12), wherein the control unit changes the display state of the content according to the face orientation or location of the user with respect to the content.
  • the control unit, as the display control for implicit guidance, controls the content to be moved to a place in the display area that does not overlap with the shadow appearing on the extension line of the display device and the user, based on the position of the display area, the position of the display device that projects the content onto the display area, and the position of the user who views the content; the information processing apparatus according to (12) above.
  • when the control unit recognizes a plurality of users from the real space, it controls to implicitly guide each user to a respective specific viewing place where each user does not interfere with the viewing of the other users, based on the position of the display area and the position of the display device that projects the content onto the display area; the information processing device according to any one of (1) to (14) above.
  • when the control unit recognizes a new user from the real space, it takes into account the position of the first user who is already viewing content in the real space and the content being viewed by the first user; the information processing apparatus according to any one of (1) to (15) above.
  • an information processing method including a processor performing, by a display device, display control that implicitly guides the user to a specific viewing place in the real space based on the position of the display area of the image recognized from the sensing data of the real space and the position of the user.
  • a program that causes a computer to function as a control unit that performs, by a display device, display control that implicitly guides the user to a specific viewing place in the real space based on the position of the display area of the image recognized from the sensing data of the real space and the position of the user.
  • 100 Information processing device, 110 I/F unit, 120 Control unit, 121 Recognition unit, 122 Viewing location determination unit, 123 Output control unit, 130 Input unit, 140 Storage unit, 200 Display device, 210 Projector, 300 Sensor, 310 Camera, 320 Distance measurement sensor

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

[Problem] To provide an information processing device, an information processing method, and a program that make it possible to use non-explicit guidance to improve visibility without imparting a sense of coercion. [Solution] An information processing device that comprises a control unit that uses a display device to perform display control that non-explicitly guides a user to a specific viewing place in an actual space on the basis of the location of a display region for an image recognized from sensing data for the actual space and the location of the user.

Description

情報処理装置、情報処理方法、およびプログラムInformation processing equipment, information processing methods, and programs
 本開示は、情報処理装置、情報処理方法、およびプログラムに関する。 This disclosure relates to information processing devices, information processing methods, and programs.
 近年、プロジェクタの小型化が進み、会議室や居住空間に設置して利用するケースが多く見受けられるようになった。また、プロジェクタで投影されたUI(ユーザインタフェース)をユーザが操作するインタラクティブな活用方法も提案されている。例えば机やテーブル、壁やソファー等、日常生活におけるあらゆる場所に画像を投影してタッチパネルのようにインタラクティブに用いることが可能である。 In recent years, projectors have become smaller and smaller, and there are many cases where they are installed and used in conference rooms and living spaces. In addition, an interactive utilization method in which the user operates the UI (user interface) projected by the projector has also been proposed. For example, it is possible to project an image on a desk, a table, a wall, a sofa, or any other place in daily life and use it interactively like a touch panel.
 例えば下記特許文献1では、投影環境を改善するための行動をユーザに促す技術について開示されている。例えば、下記特許文献1に記載の技術では、複数の候補から適切な投影場所を決定し、当該投影場所にユーザの視線を誘導したり、また、投影場所に置かれた障害物をユーザに排除させたり等の行動誘引が行われる。 For example, Patent Document 1 below discloses a technique for encouraging a user to take an action to improve the projection environment. For example, in the technique described in Patent Document 1 below, an appropriate projection location is determined from a plurality of candidates, the user's line of sight is guided to the projection location, and obstacles placed at the projection location are excluded from the user. Action attraction such as letting is performed.
特開2019-36181号公報JP-A-2019-36181
 しかしながら、上記特許文献1では、ユーザに所定の行動を促す際に、一方的な指示を明示的に行っており、ユーザに強制感を与えてしまう場合がある。 However, in the above-mentioned Patent Document 1, when urging the user to take a predetermined action, a one-sided instruction is explicitly given, which may give a feeling of coercion to the user.
 そこで、本開示では、非明示的な誘導により、強制感を与えずに視認性を向上させることが可能な情報処理装置、情報処理方法、およびプログラムを提案する。 Therefore, in this disclosure, we propose an information processing device, an information processing method, and a program that can improve visibility without giving a feeling of coercion by implicit guidance.
 本開示によれば、実空間のセンシングデータから認識される画像の表示領域の位置と、ユーザの位置と、に基づいて、前記実空間における特定の視聴場所に前記ユーザを非明示的に誘導する表示制御を表示装置により行う制御部を備える、情報処理装置を提案する。 According to the present disclosure, the user is implicitly guided to a specific viewing place in the real space based on the position of the display area of the image recognized from the sensing data in the real space and the position of the user. We propose an information processing device equipped with a control unit that performs display control by a display device.
 本開示によれば、プロセッサが、実空間のセンシングデータから認識される画像の表示領域の位置と、ユーザの位置と、に基づいて、前記実空間における特定の視聴場所に前記ユーザを非明示的に誘導する表示制御を表示装置により行うことを含む、情報処理方法を提案する。 According to the present disclosure, the processor implicitly places the user in a particular viewing location in the real space based on the position of the display area of the image recognized from the sensing data in the real space and the position of the user. We propose an information processing method that includes performing display control that guides the user to a display device.
 本開示によれば、コンピュータを、実空間のセンシングデータから認識される画像の表示領域の位置と、ユーザの位置と、に基づいて、前記実空間における特定の視聴場所に前記ユーザを非明示的に誘導する表示制御を表示装置により行う制御部として機能させる、プログラムを提案する。 According to the present disclosure, the computer implicitly places the user in a particular viewing location in the real space based on the position of the display area of the image recognized from the sensing data in the real space and the position of the user. We propose a program that functions as a control unit that performs display control guided by the display device.
FIG. 1 is a diagram illustrating an outline of an information processing system according to an embodiment of the present disclosure.
FIG. 2 is a diagram illustrating places suitable for viewing a projected image.
FIG. 3 is a diagram showing an example of the configuration of the information processing system according to the present embodiment.
FIG. 4 is a flowchart showing an example of the flow of the first guidance processing example according to the present embodiment.
FIG. 5 is a diagram illustrating an example of guidance using a human figure image according to the present embodiment.
FIG. 6 is a diagram illustrating an example of guidance using a figure shadow image according to the present embodiment.
FIG. 7 is a flowchart showing an example of the flow of the second guidance processing example according to the present embodiment.
FIG. 8 is a diagram illustrating a decrease in visibility when the user is located in the projection area according to the present embodiment.
FIG. 9 is a diagram illustrating implicit guidance by control of the content being viewed in the second guidance processing example.
FIG. 10 is a diagram illustrating a first control according to a modified example of the present embodiment.
FIG. 11 is a diagram illustrating a second control according to a modified example of the present embodiment.
FIG. 12 is a diagram illustrating a third control according to a modified example of the present embodiment.
 以下に添付図面を参照しながら、本開示の好適な実施の形態について詳細に説明する。なお、本明細書及び図面において、実質的に同一の機能構成を有する構成要素については、同一の符号を付することにより重複説明を省略する。 The preferred embodiments of the present disclosure will be described in detail with reference to the accompanying drawings below. In the present specification and the drawings, components having substantially the same functional configuration are designated by the same reference numerals, so that duplicate description will be omitted.
 また、説明は以下の順序で行うものとする。
 1.本開示の一実施形態による情報処理システムの概要
 2.構成例
  2-1.出力装置200
  2-2.センサ300
  2-3.情報処理装置100
 3.動作処理例
  3-1.第1の誘導処理例
  3-2.第2の誘導処理例
 4.変形例
 5.補足
In addition, the explanation shall be given in the following order.
1. Outline of information processing system according to one embodiment of the present disclosure
2. Configuration example
 2-1. Output device 200
 2-2. Sensor 300
 2-3. Information processing device 100
3. Operation processing example
 3-1. First guidance processing example
 3-2. Second guidance processing example
4. Modification example
5. Supplement
 <<1.本開示の一実施形態による情報処理システムの概要>>
 図1は、本開示の一実施形態による情報処理システムの概要について説明する図である。図1に示すように、本実施形態による情報処理システムは、プロジェクタ210、カメラ310、および情報処理装置100を含む。
<< 1. Outline of information processing system according to one embodiment of the present disclosure >>
FIG. 1 is a diagram illustrating an outline of an information processing system according to an embodiment of the present disclosure. As shown in FIG. 1, the information processing system according to the present embodiment includes a projector 210, a camera 310, and an information processing device 100.
 プロジェクタ210は、実空間の任意の場所に画像を投影(表示)する表示装置である。例えばプロジェクタ210は、会議室や居住空間の床、壁、机、テーブル、ソファー等の任意の場所に画像を投影する。なお、プロジェクタ210は、出力装置200(図3)の一例である。また、プロジェクタ210は、投影画角に含まれる実空間に画像を投影する。投影する画像は、情報処理装置100から出力される。投影画角は投影可能な範囲を意味し、本明細書では、「投影領域」とも称される。なお、投影領域は、表示領域の一例である。 The projector 210 is a display device that projects (displays) an image at an arbitrary place in real space. For example, the projector 210 projects an image on an arbitrary place such as a floor, a wall, a desk, a table, or a sofa in a conference room or a living space. The projector 210 is an example of the output device 200 (FIG. 3). Further, the projector 210 projects an image in the real space included in the projection angle of view. The image to be projected is output from the information processing apparatus 100. The projected angle of view means a projectable range, and is also referred to as a "projection area" in the present specification. The projection area is an example of a display area.
 また、投影領域は、プロジェクタ210の位置、投影方向、及び投影方向を中心軸とする投影可能な範囲の角度により定義される。プロジェクタ210により投影される画像は、投影画像とも称される。なお、投影画像は、表示画像の一例である。 Further, the projection area is defined by the position of the projector 210, the projection direction, and the angle of the projectable range about the projection direction as the central axis. The image projected by the projector 210 is also referred to as a projected image. The projected image is an example of a display image.
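This geometric definition lends itself to a small calculation. The following sketch, given only as an illustration, intersects the edge rays of an assumed square projection cone with the floor plane to obtain the corner points of a floor-surface projection area; the function name and the pinhole-style model are assumptions and are not part of this disclosure.

```python
import numpy as np

def projection_footprint(position, direction, half_angle_deg, floor_z=0.0):
    """Approximate the floor footprint (projection area) of a projector.

    position: (x, y, z) of the projector, direction: projection axis,
    half_angle_deg: half of the projectable angle about that axis.
    Returns the points where the frustum edge rays hit the floor plane z = floor_z.
    """
    position = np.asarray(position, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)

    # Two vectors orthogonal to the projection axis span the projection cone.
    up = np.array([0.0, 0.0, 1.0])
    right = np.cross(direction, up)
    if np.linalg.norm(right) < 1e-6:          # projector pointing straight down
        right = np.array([1.0, 0.0, 0.0])
    right = right / np.linalg.norm(right)
    down = np.cross(direction, right)

    t = np.tan(np.radians(half_angle_deg))
    corners = []
    for sx, sy in [(-1, -1), (1, -1), (1, 1), (-1, 1)]:
        ray = direction + t * (sx * right + sy * down)
        ray = ray / np.linalg.norm(ray)
        if abs(ray[2]) < 1e-6:                # ray parallel to the floor: no hit
            continue
        s = (floor_z - position[2]) / ray[2]  # ray parameter where it meets the floor
        if s > 0:
            corners.append(position + s * ray)
    return corners

# Example: a projector mounted at 2.5 m height, tilted toward the floor.
print(projection_footprint((0.0, 0.0, 2.5), (0.3, 0.0, -1.0), 20.0))
```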
 図1に示す例では、実空間の床面に投影領域211がある場合を想定する。なお、本実施形態はこれに限定されず、実空間の壁や天井に投影領域211があってもよい。また、床面や壁など、複数の場所に投影領域211があってもよい(投影領域211の範囲が、床面や壁など複数の場所を含む広さであってもよい)。プロジェクタ210は任意の方向に駆動する機構(例えばパン・チルト機構)を有していてもよい。また、複数のプロジェクタ210が実空間に設けられていてもよい。複数のプロジェクタ210により、より広い範囲を投影領域211とすることが可能である。 In the example shown in FIG. 1, it is assumed that the projection area 211 is on the floor surface in the real space. The present embodiment is not limited to this, and the projection area 211 may be provided on the wall or ceiling of the real space. Further, the projection area 211 may be provided in a plurality of places such as a floor surface and a wall (the range of the projection area 211 may be a width including a plurality of places such as a floor surface and a wall). The projector 210 may have a mechanism for driving in any direction (for example, a pan / tilt mechanism). Further, a plurality of projectors 210 may be provided in the real space. With the plurality of projectors 210, it is possible to set a wider range as the projection area 211.
 カメラ310は、実空間を撮像する撮像装置である。カメラ310は、RGB(赤緑青)カメラ、IR(赤外線)カメラ等の、レンズ系、駆動系、及び撮像素子を有し、画像(静止画像又は動画像)を撮像する。なお、カメラ310は、センサ300(図3)の一例である。 The camera 310 is an imaging device that captures images in real space. The camera 310 has a lens system, a drive system, and an image pickup element such as an RGB (red, green, blue) camera, an IR (infrared) camera, etc., and captures an image (still image or moving image). The camera 310 is an example of the sensor 300 (FIG. 3).
 また、カメラ310は、撮像画角311内に含まれる実空間を撮像する。撮像画角311は、撮像可能な範囲を意味し、カメラ310の設置位置、撮像方向、及び撮像方向を中心軸とする撮像可能な範囲の角度により定義される。カメラ310により撮像された画像は、撮像画像とも称される。本実施形態によるカメラ310の撮像画角311は、少なくとも投影領域211を含む範囲としてもよい。また、撮像画角311は、実空間全体を含む範囲であってもよい。カメラ310は、任意の方向に駆動する機構(例えばパン・チルト機構)を有していてもよい。また、カメラ310をプロジェクタ210に固定し、カメラ310の撮像方向をプロジェクタ210の投影方向に合わせるようにしてもよい。また、カメラ310は、プロジェクタ210と異なる位置に設けられていてもよい。また、複数のカメラ310が実空間に設けられていてもよい。複数のカメラ310により、より広い範囲を撮像画角311とすることが可能である。カメラ310により撮像された撮像画像は、情報処理装置100に出力される。 Further, the camera 310 captures the real space included in the imaging angle of view 311. The image pickup angle of view 311 means an image pickup range, and is defined by the installation position of the camera 310, the image pickup direction, and the angle of the image pickup range centered on the image pickup direction. The image captured by the camera 310 is also referred to as a captured image. The image pickup angle of view 311 of the camera 310 according to the present embodiment may be a range including at least the projection area 211. Further, the imaged angle of view 311 may be a range including the entire real space. The camera 310 may have a mechanism for driving in any direction (for example, a pan / tilt mechanism). Further, the camera 310 may be fixed to the projector 210 so that the imaging direction of the camera 310 is aligned with the projection direction of the projector 210. Further, the camera 310 may be provided at a position different from that of the projector 210. Further, a plurality of cameras 310 may be provided in the real space. With the plurality of cameras 310, it is possible to set the imaging angle of view 311 in a wider range. The captured image captured by the camera 310 is output to the information processing apparatus 100.
 情報処理装置100は、実空間に配置されたプロジェクタ210およびカメラ310と通信接続し、プロジェクタ210による実空間への画像の投影の制御を行ったり、カメラ310により撮像された撮像画像を取得したりする。情報処理装置100は、カメラ310から取得した撮像画像に基づいて、実空間の認識(空間認識)を行い得る。 The information processing apparatus 100 communicates with a projector 210 and a camera 310 arranged in a real space, controls the projection of an image into the real space by the projector 210, and acquires an image captured by the camera 310. do. The information processing apparatus 100 can perform real space recognition (spatial recognition) based on the captured image acquired from the camera 310.
 Further, although not shown in FIG. 1, one or more speakers may be further provided in the real space. The information processing apparatus 100 communicates with the speakers and controls the audio output from them. The speaker is an example of the output device 200 (FIG. 3) and may be a directional speaker. Further, the speaker may be a unit integrated with the projector 210, may be arranged in a place different from the projector 210 in the real space, or may be provided in a personal terminal such as a smartphone or a mobile phone.
 (課題の整理)
 ここで、ユーザが投影領域211に投影された画像(コンテンツとも称する)を視聴する際、ユーザの位置によっては、コンテンツ視聴の視認性や快適性が低下する場合がある。例えば、プロジェクタ210と投影領域211の間にユーザが位置した場合、ユーザの背面からプロジェクタ210の光が照射され、ユーザの視線方向にユーザの影が生じてコンテンツ(投影画像)の視聴の邪魔になることが想定される。以下、図2を参照して具体的に説明する。
(Arrangement of issues)
Here, when the user views an image (also referred to as content) projected on the projection area 211, the visibility and comfort of viewing the content may deteriorate depending on the position of the user. For example, when the user is located between the projector 210 and the projection area 211, the light of the projector 210 is emitted from behind the user, and the user's shadow is expected to appear in the direction of the user's line of sight and interfere with viewing of the content (projected image). A specific description will be given below with reference to FIG. 2.
 図2は、投影画像の視聴に適した場所について説明する図である。図2に示す例では、投影画像の向きがユーザの身体に向きに正対するよう表示制御される場合を前提とする。ユーザの影は、プロジェクタ210(光源)とユーザの延長線上に出現する。したがって、例えば図2左上に示すようにプロジェクタ210と投影領域211の間(エリアE1)にユーザが位置した場合、ユーザの視線方向と同方向に影が生じ、投影画像500aの視聴の邪魔となる。一方、図2右上に示す位置関係では、プロジェクタ210とユーザの間に投影領域211が位置するため、ユーザの影は、ユーザの視線方向とは異なる方向(視野領域外)に生じ、投影画像500bの視聴の邪魔とはならない。しかし、ユーザの位置が、投影領域211の短手方向側のエリアE2であるため、投影画像500bが縮小して表示され、視認性が低下し得る。 FIG. 2 is a diagram illustrating a place suitable for viewing a projected image. In the example shown in FIG. 2, it is assumed that the display is controlled so that the orientation of the projected image faces the user's body. The shadow of the user appears on the projector 210 (light source) and the extension line of the user. Therefore, for example, when the user is located between the projector 210 and the projection area 211 (area E1) as shown in the upper left of FIG. 2, a shadow is generated in the same direction as the user's line of sight, which hinders the viewing of the projected image 500a. .. On the other hand, in the positional relationship shown in the upper right of FIG. 2, since the projection area 211 is located between the projector 210 and the user, the shadow of the user is generated in a direction different from the line-of-sight direction of the user (outside the visual field area), and the projected image 500b. It does not interfere with the viewing of. However, since the position of the user is the area E2 on the lateral side of the projection area 211, the projected image 500b may be reduced and displayed, and the visibility may be reduced.
Further, as shown in the lower right of FIG. 2, when the user is located inside the projection area 211 (area E4), even at a position and body orientation where no shadow appears in the line-of-sight direction, the full extent of the projection area 211 cannot be used and the projected image 500d may be displayed at a reduced size, which may lower visibility. In particular, when images are projected in a living space, the user is expected to walk around to various positions.
On the other hand, as shown in the lower left of FIG. 2, in the case of area E3, which faces the projector 210 across the projection area 211 and lies on the longitudinal side of the projection area 211, no shadow appears in the line-of-sight direction of the user, and the projected image 500c is displayed in a size that makes maximum use of the projection area 211, so the user's visibility is high. That is, in the positional relationship shown in FIG. 2, area E3 can be said to be the most preferable viewing place for the user.
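The relationship illustrated in FIG. 2 can be summarized as a simple 2D check: the user's shadow extends along the direction from the projector (light source) through the user, so visibility is at risk when that direction is close to the user's line-of-sight direction. The following sketch only illustrates that check; the angle threshold and the function name are assumptions.

```python
import numpy as np

def shadow_in_view(projector_xy, user_xy, gaze_xy, threshold_deg=60.0):
    """Return True if the user's shadow is likely to overlap what the user looks at.

    The shadow extends from the user along the projector -> user direction,
    so it disturbs viewing when that direction is close to the gaze direction.
    """
    shadow_dir = np.asarray(user_xy, float) - np.asarray(projector_xy, float)
    shadow_dir /= np.linalg.norm(shadow_dir)
    gaze_dir = np.asarray(gaze_xy, float)
    gaze_dir /= np.linalg.norm(gaze_dir)
    angle = np.degrees(np.arccos(np.clip(np.dot(shadow_dir, gaze_dir), -1.0, 1.0)))
    return angle < threshold_deg

# Area E1 in FIG. 2: the user stands between the projector and the projection
# area and looks away from the projector, so the shadow falls in the gaze direction.
print(shadow_in_view(projector_xy=(0, 0), user_xy=(0, 1), gaze_xy=(0, 1)))   # True
# Area E3: the user looks back across the projection area toward the projector.
print(shadow_in_view(projector_xy=(0, 0), user_xy=(0, 5), gaze_xy=(0, -1)))  # False
```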
Therefore, when presenting content, it is desirable to guide the user to an appropriate viewing place (area E3 in the example shown in FIG. 2) to improve visibility. However, if the system unilaterally gives explicit instructions, as in the prior art described above, it may give the user a sense of coercion or the uncomfortable impression of being monitored.
 そこで、本開示では、非明示的な誘導により、強制的な印象を与えずに視認性を向上させることを可能とする仕組みを提案する。 Therefore, in this disclosure, we propose a mechanism that makes it possible to improve visibility without giving a compulsory impression by implicit guidance.
 以下、本実施形態による情報処理システムの構成と動作処理例について順次説明する。 Hereinafter, the configuration and operation processing example of the information processing system according to the present embodiment will be sequentially described.
 <<2.構成例>>
 図3は、本実施形態に係る情報処理システムの構成の一例を示す図である。図3に示すように、本実施形態に係る情報処理システムは、情報処理装置100、出力装置200、およびセンサ300を含む。
<< 2. Configuration example >>
FIG. 3 is a diagram showing an example of the configuration of the information processing system according to the present embodiment. As shown in FIG. 3, the information processing system according to the present embodiment includes an information processing device 100, an output device 200, and a sensor 300.
 <2-1.出力装置200>
 出力装置200は、情報処理装置100から受信した情報を、実空間においてユーザに提示する装置である。出力装置200は、例えばプロジェクタ210により実現される。プロジェクタ210は、表示装置の一例である。また、出力装置200は、図3には図示していないが、他にも例えば音声出力装置(スピーカ)や、照明装置、振動装置、風出力装置、空調装置、各種アクチュエータ等、何らかの情報をユーザに提示し得る装置をさらに含んでいてもよい。また、空間内に複数の出力装置200が存在していてもよい。
<2-1. Output device 200>
The output device 200 is a device that presents the information received from the information processing device 100 to the user in the real space. The output device 200 is realized by, for example, the projector 210. The projector 210 is an example of a display device. In addition, although not shown in FIG. 3, the output device 200 may further include other devices capable of presenting some kind of information to the user, such as an audio output device (speaker), a lighting device, a vibration device, a wind output device, an air conditioning device, and various actuators. Further, a plurality of output devices 200 may exist in the space.
 また、プロジェクタ210は、例えば駆動機構を有し、任意の方向へ投影可能な装置であってもよい。このような機構を有することで、1カ所だけでなく、様々な箇所に画像を投影することができる。 Further, the projector 210 may be, for example, a device having a drive mechanism and capable of projecting in any direction. By having such a mechanism, it is possible to project an image not only at one place but also at various places.
 また、プロジェクタ210は、表示以外の出力が可能な構成要素を含んでいてもよい。例えば、プロジェクタ210にスピーカなどの音出力装置を組み合わせてもよい。スピーカは、単一の方向に指向性を形成可能な単一指向性スピーカであってもよい。単一指向性スピーカは、例えばユーザが居る方向に音声を出力する。 Further, the projector 210 may include a component capable of output other than the display. For example, the projector 210 may be combined with a sound output device such as a speaker. The speaker may be a unidirectional speaker capable of forming directivity in a single direction. The unidirectional speaker outputs sound in the direction in which the user is, for example.
 なお、プロジェクタ210は表示装置の一例であるが、出力装置200に含まれる表示装置として、プロジェクタ210以外の表示装置が用いられてもよい。例えば、表示装置は、実空間の床や壁、テーブル等に設けられたディスプレイであってもよい。また、表示装置は、ユーザのタッチ操作を検出し得るタッチパネルディスプレイであってもよい。また、表示装置は、実空間に配置されたTV装置であってもよい。また、表示装置は、ユーザが身に着けるデバイスであってもよい。ユーザが身に着けるデバイスとは、例えば、透過ディスプレイが設けられたメガネ型デバイスや、ユーザの頭部に装着されるHMD(Head Mounted Display)等のウェアラブルデバイスであってもよい。また、表示装置は、スマートフォン、タブレット端末、携帯電話、PC(パーソナルコンピュータ)等の個人端末であってもよい。 Although the projector 210 is an example of a display device, a display device other than the projector 210 may be used as the display device included in the output device 200. For example, the display device may be a display provided on a floor, a wall, a table, or the like in a real space. Further, the display device may be a touch panel display capable of detecting a user's touch operation. Further, the display device may be a TV device arranged in a real space. Further, the display device may be a device worn by the user. The device worn by the user may be, for example, a glasses-type device provided with a transmissive display or a wearable device such as an HMD (Head Mounted Display) worn on the user's head. Further, the display device may be a personal terminal such as a smartphone, a tablet terminal, a mobile phone, or a PC (personal computer).
 <2-2.センサ300>
 センサ300は、実空間に設けられ、実空間から環境やユーザに関する情報を検知し、検知した情報(センシングデータ)を情報処理装置100に出力する。具体的には、センサ300は、実空間の3次元情報(実空間の形状や、家具等の実物体の配置や形状等)、投影領域の情報(大きさや場所)、投影面の状態(粗さや材質、色など)、照度環境、および音量等の環境情報をセンシングする。また、センサ300は、ユーザの有無、人数、位置、姿勢、視線方向、顔の向き、および手指のジェスチャ等のユーザに関する情報をセンシングする。センサ300は、単一若しくは複数であってもよい。また、センサ300は、出力装置200に設けられていてもよい。
<2-2. Sensor 300>
The sensor 300 is provided in the real space, detects information about the environment and the user in the real space, and outputs the detected information (sensing data) to the information processing apparatus 100. Specifically, the sensor 300 senses environmental information such as three-dimensional information of the real space (the shape of the real space, the arrangement and shapes of real objects such as furniture, and the like), information on the projection area (size and location), the state of the projection surface (roughness, material, color, and the like), the illuminance environment, and the sound volume. Further, the sensor 300 senses information about the user such as the presence or absence of the user, the number of people, the position, the posture, the line-of-sight direction, the direction of the face, and finger gestures. The sensor 300 may be single or plural. Further, the sensor 300 may be provided in the output device 200.
 本実施形態によるセンサ300は、図3に示すように、例えばカメラ310、および測距センサ320により実現される。カメラ310および測距センサ320は、実空間の天井や壁、テーブル等に設置されてもよいし、ユーザが身に着けていてもよい。また、カメラ310および測距センサ320はそれぞれ複数設けられていてもよい。 As shown in FIG. 3, the sensor 300 according to the present embodiment is realized by, for example, a camera 310 and a distance measuring sensor 320. The camera 310 and the distance measuring sensor 320 may be installed on a ceiling, a wall, a table, or the like in a real space, or may be worn by a user. Further, a plurality of cameras 310 and a plurality of distance measuring sensors 320 may be provided.
 カメラ300は、空間内に居る1以上のユーザや投影領域211を撮像し、撮像画像を情報処理装置100に出力する。当該カメラ300は、単一若しくは複数個であってもよい。例えば、環境認識用のカメラ、ユーザ認識用のカメラ、投影領域撮影用のカメラを実空間に配置してもよい。また、撮像波長は、可視光域に限らず、紫外、赤外を含んでもよいし、特定波長領域に制限してもよい。例えばカメラ300は、RGBカメラとIRカメラが組み合わされたRGB-IRカメラであってもよい。この場合、情報処理装置100は、可視光画像(RGB画像、カラー画像とも称される)とIR画像を同時に取得し得る。 The camera 300 captures one or more users in the space and the projection area 211, and outputs the captured image to the information processing apparatus 100. The camera 300 may be single or plural. For example, a camera for environment recognition, a camera for user recognition, and a camera for shooting a projection area may be arranged in a real space. Further, the imaging wavelength is not limited to the visible light region, and may include ultraviolet rays and infrared rays, or may be limited to a specific wavelength region. For example, the camera 300 may be an RGB-IR camera in which an RGB camera and an IR camera are combined. In this case, the information processing apparatus 100 can simultaneously acquire a visible light image (also referred to as an RGB image or a color image) and an IR image.
 測距センサ320は、空間内の距離情報(デプスデータ)を検出し、情報処理装置100に出力する。測距センサ320は、空間内の3次元情報を網羅的に認識可能な3次元画像を取得でき、メカ機構によって駆動可能なデプスセンサにより実現されてもよい。また、測距センサ320は、赤外光を光源とした方式、超音波を用いた方式、複数台のカメラを用いた方式、および画像処理を用いた方式等により実現されてもよい。すなわち、測距センサ320(デプスセンサ)は、赤外線測距装置、超音波測距装置、LiDAR(Laser Imaging Detection and Ranging)又はステレオカメラ等の深度情報を取得する装置であってもよい。また、測距センサ320は、高精度な距離画像を取得できるToF(Time Of Flight)カメラであってもよい。また、測距センサ320は、単一若しくは複数個であってもよいし、空間内の距離情報を一括取得してもよい。 The distance measuring sensor 320 detects the distance information (depth data) in the space and outputs it to the information processing apparatus 100. The distance measuring sensor 320 may be realized by a depth sensor that can acquire a three-dimensional image that can comprehensively recognize three-dimensional information in space and can be driven by a mechanical mechanism. Further, the distance measuring sensor 320 may be realized by a method using infrared light as a light source, a method using ultrasonic waves, a method using a plurality of cameras, a method using image processing, or the like. That is, the range-finding sensor 320 (depth sensor) may be a device that acquires depth information such as an infrared range-finding device, an ultrasonic range-finding device, LiDAR (Laser Imaging Detection and Ringing), or a stereo camera. Further, the distance measuring sensor 320 may be a ToF (Time Of Flight) camera capable of acquiring a highly accurate distance image. Further, the distance measuring sensor 320 may be a single sensor or a plurality of sensors 320, or the distance information in the space may be collectively acquired.
 センサ300を実現するカメラ310および測距センサ320は、それぞれ異なる場所に設けられてもよいし、同一の場所に設けられてもよい。また、センサ300はカメラ310および測距センサ320に限定されず、さらに照度センサやマイクロホンにより実現されてもよい。また、センサ300は、投影領域211に設けられ、投影領域211に対するユーザ操作を検出するタッチセンサによりさらに実現されてもよい。 The camera 310 and the distance measuring sensor 320 that realize the sensor 300 may be provided at different places or may be provided at the same place. Further, the sensor 300 is not limited to the camera 310 and the distance measuring sensor 320, and may be further realized by an illuminance sensor or a microphone. Further, the sensor 300 may be further realized by a touch sensor provided in the projection area 211 and detecting a user operation on the projection area 211.
 <2-3.情報処理装置100>
 情報処理装置100は、I/F(Interface)部110、制御部120、入力部130、および記憶部140を含む。
<2-3. Information processing device 100>
The information processing apparatus 100 includes an I / F (Interface) unit 110, a control unit 120, an input unit 130, and a storage unit 140.
 (I/F部110)
 I/F部110は、情報処理装置100と他の機器とを接続するための接続装置である。I/F部110は、例えばUSB(Universal Serial Bus)コネクタ、有線/無線LAN(Local Area Network)、Wi-Fi(登録商標)、Bluetooth(登録商標)、ZigBee(登録商標)、携帯通信網(LTE(Long Term Evolution)、3G(第3世代の移動体通信方式)、4G(第4世代の移動体通信方式)、5G(第5世代の移動体通信方式))等により実現される。I/F部110は、プロジェクタ210、カメラ310、および測距センサとの間でそれぞれ情報の入出力を行う。
(I / F section 110)
The I / F unit 110 is a connection device for connecting the information processing device 100 and other devices. The I / F unit 110 includes, for example, a USB (Universal Serial Bus) connector, a wired / wireless LAN (Local Area Network), a Wi-Fi (registered trademark), a Bluetooth (registered trademark), a ZigBee (registered trademark), and a mobile communication network (registered trademark). It is realized by LTE (Long Term Evolution), 3G (3rd generation mobile communication method), 4G (4th generation mobile communication method), 5G (5th generation mobile communication method), and the like. The I / F unit 110 inputs / outputs information to / from the projector 210, the camera 310, and the distance measuring sensor, respectively.
 (制御部120)
 制御部120は、演算処理装置および制御装置として機能し、各種プログラムに従って情報処理装置100内の動作全般を制御する。制御部120は、例えばCPU(Central Processing Unit)、マイクロプロセッサ等の電子回路によって実現される。また、制御部120は、使用するプログラムや演算パラメータ等を記憶するROM(Read Only Memory)、及び適宜変化するパラメータ等を一時記憶するRAM(Random Access Memory)を含んでいてもよい。
(Control unit 120)
The control unit 120 functions as an arithmetic processing unit and a control device, and controls the overall operation in the information processing device 100 according to various programs. The control unit 120 is realized by an electronic circuit such as a CPU (Central Processing Unit) or a microprocessor. Further, the control unit 120 may include a ROM (Read Only Memory) for storing programs to be used, calculation parameters, and the like, and a RAM (Random Access Memory) for temporarily storing parameters and the like that change as appropriate.
 本実施形態による制御部120は、認識部121、視聴場所決定部122、および表示制御部123として機能する。 The control unit 120 according to the present embodiment functions as a recognition unit 121, a viewing location determination unit 122, and a display control unit 123.
 ・認識部121
 認識部121は、センサ300により検出された各種センシングデータ(撮像画像、デプスデータ等)に基づいて、実空間の環境およびユーザの認識を行う。例えば認識部121は、環境認識処理として、実空間の形状、実物体の存在(家電や家具の配置、形状等)、投影面の状態(粗さや色、反射率等)、投影領域の位置や大きさ、投影領域に載置されている障害物、投影領域への投影光を遮蔽する遮蔽物(家電や家具、ユーザ等)、照度環境、音環境等の認識を行う。
-Recognition unit 121
The recognition unit 121 recognizes the real-space environment and the user based on various sensing data (captured images, depth data, etc.) detected by the sensor 300. For example, as environment recognition processing, the recognition unit 121 recognizes the shape of the real space, the presence of real objects (the arrangement and shapes of home appliances, furniture, and the like), the state of the projection surface (roughness, color, reflectance, etc.), the position and size of the projection area, obstacles placed in the projection area, objects that block the projection light toward the projection area (home appliances, furniture, users, etc.), the illuminance environment, the sound environment, and the like.
 また、認識部121は、ユーザ認識処理として、ユーザの有無、位置、人数、姿勢、視線方向、顔向き、手指のジェスチャ、ユーザ操作等を検出する。ユーザ操作は、投影画像(投影面)に対するタッチ操作や、IR発光部等がペン先に設けられたデジタルペンによる投影画像(投影面)に対する操作等が挙げられる。また、ユーザ操作として、コントローラやレーザー光を用いた操作等も挙げられる。また、認識部121は、マイクロホンによりユーザの発話音声を取得し、ユーザによる音声入力を認識してもよい。 Further, the recognition unit 121 detects the presence / absence of a user, the position, the number of people, the posture, the line-of-sight direction, the face orientation, the gesture of the fingers, the user operation, etc. as the user recognition process. Examples of the user operation include a touch operation on the projected image (projected surface) and an operation on the projected image (projected surface) by a digital pen provided with an IR light emitting unit or the like on the pen tip. Further, as a user operation, an operation using a controller or a laser beam can be mentioned. Further, the recognition unit 121 may acquire the user's spoken voice with a microphone and recognize the voice input by the user.
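For concreteness, the recognition results described above can be thought of as a small data structure handed from the recognition unit 121 to the later processing. The following sketch is illustrative only; its field names do not appear in this disclosure.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class UserState:
    """Per-user result of the user recognition processing."""
    position: Point                   # position in room coordinates
    face_direction: Point             # unit vector of the face / gaze direction
    posture: str = "standing"         # e.g. "standing", "sitting"

@dataclass
class EnvironmentState:
    """Result of the environment recognition processing."""
    projector_position: Point
    projection_area: List[Point] = field(default_factory=list)   # corner points
    light_sources: List[Point] = field(default_factory=list)     # other light sources
    obstacles: List[Point] = field(default_factory=list)         # objects in the area

@dataclass
class RecognitionResult:
    environment: EnvironmentState
    users: List[UserState] = field(default_factory=list)
```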
 ・視聴場所決定部122
 視聴場所決定部122は、ユーザの適切な視聴場所を決定する。例えば視聴場所決定部122は、プロジェクタ210の位置と、投影領域211の位置とに基づいて、ユーザの適切な視聴場所を決定する。適切な視聴場所とは、具体的には、投影領域211に表示される画像をユーザが視認する際の視認性の低下の恐れがより少ない場所を想定する。視認性の低下の恐れがより少ない場所とは、例えば、ユーザの視線方向に影が生じない場所や、投影画像(コンテンツ)をより大きく表示できる場所(投影領域211を有効的に利用できる場所)等が挙げられる。なお、本実施形態では、投影画像の表示向きを、ユーザの視線方向(または顔向き、身体の向き)に対して一意に定まるよう(具体的には正対するよう)にすることを前提として、投影領域211を有効的に利用できる場所を決定してもよい。
・ Viewing place determination unit 122
The viewing place determination unit 122 determines an appropriate viewing place for the user. For example, the viewing location determination unit 122 determines an appropriate viewing location for the user based on the position of the projector 210 and the position of the projection area 211. Specifically, the appropriate viewing place is assumed to be a place where there is less risk of deterioration in visibility when the user visually recognizes the image displayed in the projection area 211. A place where there is less risk of deterioration of visibility is, for example, a place where a shadow does not occur in the line-of-sight direction of the user or a place where a projected image (content) can be displayed larger (a place where the projection area 211 can be effectively used). And so on. In this embodiment, it is premised that the display orientation of the projected image is uniquely determined (specifically, facing the user) with respect to the user's line-of-sight direction (or face orientation, body orientation). A place where the projection area 211 can be effectively used may be determined.
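One way to make this determination concrete is to score candidate positions around the projection area and pick the one where the expected shadow direction diverges most from the expected gaze direction, with a mild preference for positions close to the area (which, for an elongated area, favours its long sides). The following sketch and its weights are illustrative assumptions only.

```python
import numpy as np

def choose_viewing_place(projector_xy, area_corners, candidates):
    """Pick the candidate viewing position with the lowest penalty.

    A candidate is penalised when the shadow direction (projector -> candidate)
    is close to the direction from the candidate toward the area centre
    (the expected gaze direction), plus a mild distance term.
    """
    projector = np.asarray(projector_xy, float)
    corners = np.asarray(area_corners, float)
    centre = corners.mean(axis=0)

    def penalty(candidate):
        c = np.asarray(candidate, float)
        shadow_dir = c - projector
        shadow_dir /= np.linalg.norm(shadow_dir)
        gaze_dir = centre - c
        gaze_dir /= np.linalg.norm(gaze_dir)
        shadow_term = max(0.0, np.dot(shadow_dir, gaze_dir))   # 1.0 = shadow straight ahead
        distance_term = 0.1 * np.linalg.norm(centre - c)       # mild preference for nearby spots
        return shadow_term + distance_term

    return min(candidates, key=penalty)

# Candidates placed just outside each side of a rectangular floor projection area.
area = [(1, 1), (4, 1), (4, 3), (1, 3)]
candidates = [(2.5, 0.5), (2.5, 3.5), (0.5, 2.0), (4.5, 2.0)]
# Prints (2.5, 3.5): the long side opposite the projector, as in area E3 of FIG. 2.
print(choose_viewing_place(projector_xy=(2.5, -2.0), area_corners=area, candidates=candidates))
```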
 ・出力制御部123
 出力制御部123は、出力装置200からの情報の出力を制御する。例えば出力制御部123は、プロジェクタ210からの画像投影(表示)を制御する表示制御部として機能する。表示制御部としては、画像の投影場所や向きを制御し得る。また、出力制御部123は、さらにスピーカ(不図示)からの音声出力を制御する音声出力制御部として機能してもよい。また、出力制御部123は、その他様々な出力装置の制御部として機能し得る。
Output control unit 123
The output control unit 123 controls the output of information from the output device 200. For example, the output control unit 123 functions as a display control unit that controls image projection (display) from the projector 210. The display control unit can control the projection location and orientation of the image. Further, the output control unit 123 may further function as an audio output control unit that controls audio output from a speaker (not shown). Further, the output control unit 123 can function as a control unit of various other output devices.
 また、本実施形態による出力制御部123は、表示制御部としての機能として、ユーザを、適切な視聴場所に非明示的に誘導する表示制御を行い得る。例えば出力制御部123は、ユーザの視線方向に、あたかもユーザの影のように人影画像を投影し、視聴場所決定部122により決定された場所に近付く程、当該人影画像が小さくなるよう表示制御する。これにより、ユーザが視聴の邪魔になる自身の影(実際は投影された人影画像)を避けようと、影が小さくなる方向(誘導したい方向)に自然と自ら移動することが期待できる。また、出力制御部123は、コンテンツ(動画、静止画、Webサイト、テキスト、GUIなど)の表示状態を制御することで、ユーザを非明示的に誘導することも可能である。本実施形態による非明示的な誘導の詳細については後述する。 Further, the output control unit 123 according to the present embodiment can perform display control that implicitly guides the user to an appropriate viewing place as a function as a display control unit. For example, the output control unit 123 projects a human figure image in the line-of-sight direction of the user as if it were a shadow of the user, and controls the display so that the closer the person is to the place determined by the viewing place determination unit 122, the smaller the human image is. .. As a result, it can be expected that the user will naturally move in the direction in which the shadow becomes smaller (the direction in which he / she wants to be guided) in order to avoid his / her own shadow (actually, the projected human image) that interferes with viewing. Further, the output control unit 123 can implicitly guide the user by controlling the display state of the content (moving image, still image, website, text, GUI, etc.). Details of the implicit induction by this embodiment will be described later.
 (入力部130)
 入力部130は、情報処理装置100への入力情報を受け付ける。例えば入力部130は、ユーザによる操作指示を受け付ける操作入力部であってもよい。操作入力部は、タッチセンサ、圧力センサ、若しくは近接センサであってもよい。あるいは、操作入力部は、ボタン、スイッチ、およびレバーなど、物理的構成であってもよい。また、入力部130は、音声入力部(マイクロホン)であってもよい。
(Input unit 130)
The input unit 130 receives input information to the information processing device 100. For example, the input unit 130 may be an operation input unit that receives an operation instruction by the user. The operation input unit may be a touch sensor, a pressure sensor, or a proximity sensor. Alternatively, the operation input unit may have a physical configuration such as a button, a switch, and a lever. Further, the input unit 130 may be a voice input unit (microphone).
 (記憶部140)
 記憶部140は、制御部120の処理に用いられるプログラムや演算パラメータ等を記憶するROM(Read Only Memory)、および適宜変化するパラメータ等を一時記憶するRAM(Random Access Memory)により実現される。例えば、記憶部140は、I/F部110により外部装置から入力された各種情報や、制御部120により算出、生成された各種情報を記憶する。
(Memory unit 140)
The storage unit 140 is realized by a ROM (Read Only Memory) that stores programs and arithmetic parameters used for processing of the control unit 120, and a RAM (Random Access Memory) that temporarily stores parameters and the like that change as appropriate. For example, the storage unit 140 stores various information input from the external device by the I / F unit 110 and various information calculated and generated by the control unit 120.
 以上、本実施形態による情報処理装置100の構成について説明したが、情報処理装置100の構成は図3に示す例に限定されない。情報処理装置100は、例えばさらに出力部を有していてもよい。出力部は、例えば表示部または音声出力部(マイクロホン)により実現されていてもよい。表示部は、操作画面やメニュー画面等を出力し、例えば液晶ディスプレイ(LCD:Liquid Crystal Display)、有機EL(Electro Luminescence)ディスプレイなどの表示装置であってもよい。 Although the configuration of the information processing apparatus 100 according to the present embodiment has been described above, the configuration of the information processing apparatus 100 is not limited to the example shown in FIG. The information processing apparatus 100 may further have, for example, an output unit. The output unit may be realized by, for example, a display unit or an audio output unit (microphone). The display unit outputs an operation screen, a menu screen, or the like, and may be a display device such as a liquid crystal display (LCD: Liquid Crystal Display) or an organic EL (Electro Luminescence) display.
 また、情報処理装置100は、例えばスマートフォン、タブレット端末、PC(パーソナルコンピュータ)、HMD等により実現されてもよい。また、情報処理装置100は、出力装置200やセンサ300と同一空間に配置された専用装置であってもよい。また、情報処理装置100は、インターネット上のサーバ(クラウドサーバ)であってもよいし、中間サーバ、またはエッジサーバ等により実現されてもよい。 Further, the information processing device 100 may be realized by, for example, a smartphone, a tablet terminal, a PC (personal computer), an HMD, or the like. Further, the information processing device 100 may be a dedicated device arranged in the same space as the output device 200 and the sensor 300. Further, the information processing apparatus 100 may be a server (cloud server) on the Internet, or may be realized by an intermediate server, an edge server, or the like.
 また、情報処理装置100は、複数の装置により構成されていてもよいし、少なくとも一部の構成が出力装置200やセンサ300に設けられていてもよい。また、情報処理装置100の少なくとも一部の構成が、インターネット上のサーバ(クラウドサーバ)や、中間サーバ、エッジサーバ、出力装置200やセンサ300と同一空間に配置された専用装置、ユーザの個人端末(スマートフォン、タブレット端末、PC、HMD等)に設けられていてもよい。 Further, the information processing device 100 may be configured by a plurality of devices, or at least a part of the configuration may be provided in the output device 200 or the sensor 300. Further, at least a part of the configuration of the information processing device 100 is a server (cloud server) on the Internet, an intermediate server, an edge server, a dedicated device arranged in the same space as the output device 200 and the sensor 300, and a user's personal terminal. It may be provided in (smartphone, tablet terminal, PC, HMD, etc.).
 <<3.動作処理例>>
 続いて、本実施形態による適切な視聴場所への非明示的な誘導の動作処理例について説明する。
<< 3. Operation processing example >>
Subsequently, an operation processing example of implicit guidance to an appropriate viewing place according to the present embodiment will be described.
 <3-1.第1の誘導処理例>
 図4は、本実施形態による第1の誘導処理例の流れの一例を示すフローチャートである。第1の誘導処理例では、例えば居住空間内で画像投影を行う場合に、当該居住空間にユーザが入って来た際、すなわち画像投影を開始する前(ユーザがコンテンツの視聴を開始する前)を想定して説明する。
<3-1. First guidance processing example>
FIG. 4 is a flowchart showing an example of the flow of the first guidance processing example according to the present embodiment. The first guidance processing example assumes a case where images are projected in a living space and a user has just entered that living space, that is, the time before image projection is started (before the user starts viewing the content).
 図4に示すように、まず、情報処理装置100の認識部121は、カメラ310や測距センサ320により検出された各種センシングデータ(撮像画像やデプスデータ)に基づいて、居住空間の環境の認識を行う(ステップS103)。具体的には、認識部121は、居住空間の形状や、家具等の実物体の配置、形状等を認識する。また、認識部121は、居住空間に配置されたプロジェクタ210の位置や、投影領域211の場所等も認識する。なお、本実施形態では一例として居住空間の床面に投影領域211があるとする。認識部121は、プロジェクタ210の位置や投影方向、および投影可能な角度の情報から、投影領域211の場所や大きさを認識してもよい。また、プロジェクタ210の投影方向は、予めユーザが手動で設定および駆動制御してもよいし、情報処理装置100により自動で予め設定および駆動制御されてもよい。また、ステップS103に示す環境認識の処理は継続的に行われもよいし、一定時間毎に行われてもよい。また、環境認識の処理は、少なくとも画像投影(コンテンツの視聴)を開始する前に行われてもよいし、画像投影中にも継続して行われてもよい。環境認識の処理が継続的を行う場合、認識部121は、環境の変化(差異)を主に認識するようにしてもよい。 As shown in FIG. 4, first, the recognition unit 121 of the information processing apparatus 100 recognizes the environment of the living space based on various sensing data (captured images and depth data) detected by the camera 310 and the distance measuring sensor 320. (Step S103). Specifically, the recognition unit 121 recognizes the shape of the living space, the arrangement and shape of a real object such as furniture, and the like. The recognition unit 121 also recognizes the position of the projector 210 arranged in the living space, the location of the projection area 211, and the like. In this embodiment, it is assumed that the projection area 211 is provided on the floor surface of the living space as an example. The recognition unit 121 may recognize the location and size of the projection area 211 from the information on the position and projection direction of the projector 210 and the projectable angle. Further, the projection direction of the projector 210 may be manually set and drive-controlled by the user in advance, or may be automatically set and drive-controlled by the information processing apparatus 100. Further, the environment recognition process shown in step S103 may be continuously performed or may be performed at regular time intervals. Further, the environment recognition process may be performed at least before the start of image projection (viewing of content), or may be continuously performed during image projection. When the process of environment recognition is continuous, the recognition unit 121 may mainly recognize changes (differences) in the environment.
Next, the viewing place determination unit 122 determines an appropriate viewing place (step S106). Specifically, based on the position of the projector 210, which is the light source, and the position of the projection area 211, the viewing place determination unit 122 determines a viewing place where a decrease in visibility caused by the appearance of the user's shadow or the like can be avoided. For example, considering that the user's shadow appears on the extension line of the projector 210 and the user's position, the viewing place determination unit 122 determines a place where the direction in which the shadow appears differs from the line-of-sight direction as the appropriate viewing place. In addition, in order to display the content as large as possible and to use the projection area 211 effectively, the viewing place determination unit 122 may determine a place outside (around the periphery of) the projection area 211 as the user's appropriate viewing place. For example, in the positional relationship shown in FIG. 2, the viewing place determination unit 122 determines area E3 shown in the lower left of FIG. 2 as the appropriate viewing place.
 次いで、認識部121は、カメラ310や測距センサ320により検出された各種センシングデータ(撮像画像やデプスデータ)に基づいて、居住空間内におけるユーザの位置を認識する(ステップS109)。ここでは一例として、居住空間の出入口からユーザが入って来た場合を想定する。 Next, the recognition unit 121 recognizes the position of the user in the living space based on various sensing data (captured images and depth data) detected by the camera 310 and the distance measuring sensor 320 (step S109). Here, as an example, it is assumed that a user enters from the entrance / exit of the living space.
 次に、出力制御部123は、上記決定した適切な視聴場所への誘導が必要であるか否かを判断する(ステップS112)。具体的には、出力制御部123は、上記認識したユーザの位置が、上記決定した適切な視聴場所である場合は、誘導は必要ないと判断する。一方、上記認識したユーザの位置が、上記決定した適切な視聴場所ではない場合、出力制御部123は、誘導が必要であると判断する。例えば図2に示すような位置関係において、ユーザが、プロジェクタ210が設置されている付近(エリアE1)から部屋に入り、エリアE1に留まってコンテンツの視聴を開始すると、ユーザの視線方向に影が出現して視認性が低下する。したがって、出力制御部123は、適切な視聴場所として決定されたエリアE3にユーザを誘導する必要があると判断する。一方、例えばユーザがエリアE3付近から部屋に入ってきてエリアE3に留まっている場合、投影領域211に画像を投影してもユーザの影は視線方向とは反対側に生じて視認性は低下しないため、出力制御部123は、誘導の必要はないと判断する。 Next, the output control unit 123 determines whether or not it is necessary to guide to the appropriate viewing place determined above (step S112). Specifically, the output control unit 123 determines that guidance is not necessary when the recognized user's position is the appropriate viewing location determined above. On the other hand, if the recognized user's position is not the appropriate viewing place determined above, the output control unit 123 determines that guidance is necessary. For example, in the positional relationship as shown in FIG. 2, when the user enters the room from the vicinity where the projector 210 is installed (area E1), stays in the area E1 and starts viewing the content, a shadow is cast in the line-of-sight direction of the user. Appears and visibility is reduced. Therefore, the output control unit 123 determines that it is necessary to guide the user to the area E3 determined as an appropriate viewing place. On the other hand, for example, when the user enters the room from the vicinity of the area E3 and stays in the area E3, even if the image is projected on the projection area 211, the shadow of the user is generated on the side opposite to the line-of-sight direction and the visibility does not deteriorate. Therefore, the output control unit 123 determines that guidance is not necessary.
Next, when guidance is required (step S112 / Yes), the output control unit 123 performs guidance by display control of a shadow image (step S115) or guidance by control of the content (step S118). Guidance by display control of a shadow image is mainly used when the user's physical position in the living space should be changed (for example, to move the user from area E1 to area E3). Guidance by control of the content is mainly used when the user's face orientation, line-of-sight direction, body orientation, posture, and the like should be changed while the physical position in the living space stays substantially the same. Which method is used for guidance may be set in advance, or both may be performed in parallel or sequentially depending on the situation. Both are implicit forms of guidance, so the user can be guided without a feeling of coercion from the system or the impression of being monitored by the system. For example, when the user enters the living space, the output control unit 123 may first move the user to an appropriate viewing place by display control of a shadow image and then change the user's face orientation and posture by control of the content.
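Steps S103 to S118 can be pictured as a small dispatcher like the one below. The names (`needs_guidance`, `GuidanceType`) and the distance-based rule for choosing between shadow-image guidance and content-control guidance are illustrative assumptions; as described above, the actual choice may also be preset or the two methods may be combined.

```python
from enum import Enum, auto

class GuidanceType(Enum):
    NONE = auto()
    SHADOW_IMAGE = auto()      # step S115: move the user to another place
    CONTENT_CONTROL = auto()   # step S118: change face orientation / posture in place

def needs_guidance(user_xy, target_xy, tolerance=0.5):
    """Guidance is unnecessary when the user already stands near the target place."""
    dx = user_xy[0] - target_xy[0]
    dy = user_xy[1] - target_xy[1]
    return (dx * dx + dy * dy) ** 0.5 > tolerance

def select_guidance(user_xy, target_xy, move_threshold=1.0):
    """Choose a guidance method for the recognized user (steps S112 to S118)."""
    if not needs_guidance(user_xy, target_xy):
        return GuidanceType.NONE
    dx = user_xy[0] - target_xy[0]
    dy = user_xy[1] - target_xy[1]
    distance = (dx * dx + dy * dy) ** 0.5
    # Large displacement: lead the user to another place with the shadow image.
    # Small displacement: adjust orientation / posture by controlling the content.
    return GuidanceType.SHADOW_IMAGE if distance > move_threshold else GuidanceType.CONTENT_CONTROL

print(select_guidance(user_xy=(0.0, 0.0), target_xy=(3.0, 2.0)))  # GuidanceType.SHADOW_IMAGE
```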
 以下、各誘導について具体的に説明する。 Hereinafter, each induction will be described in detail.
 (影画像の表示制御による誘導)
 影画像の表示制御による誘導として、出力制御部123は、ユーザの視線方向(身体が向いている方向等、ユーザの前方であって、ユーザの視野範囲に入る場所が望ましい)に人影画像(仮想オブジェクトの一例)をユーザの影のように表示する(例えばユーザの足元から影が延びているように人影画像を表示する)。これにより、ユーザは、床面に表示されるコンテンツを視聴する際に自身の影が邪魔になると考え、別の場所に自ら自然に移動することが期待される。出力制御部123は、ユーザが移動した場合は人影画像も追随させ、適切な視聴場所に移動した際には人影画像の表示を終了するようにしてもよい。
(Induction by controlling the display of shadow images)
As guidance by display control of a shadow image, the output control unit 123 displays a human figure image (an example of a virtual object) in the user's line-of-sight direction (preferably in front of the user and within the user's field of view, for example the direction in which the body is facing) as if it were the user's own shadow (for example, the human figure image is displayed as if a shadow extended from the user's feet). As a result, the user thinks that his or her shadow will get in the way when viewing the content displayed on the floor, and is expected to move naturally to another place. The output control unit 123 may make the human figure image follow the user when the user moves, and may end the display of the human figure image when the user reaches the appropriate viewing place.
The human figure image may be a realistic, finely shaped silhouette, or a deformed silhouette with fine details omitted. Since a person's shadow is a familiar physical phenomenon that users experience every day, using a human figure image makes it possible to guide the user implicitly without making the user feel that anything is unnatural. Since shadows are generally perceived as black or gray, the color of the human figure image is preferably close to black or gray, but the present embodiment is not limited to this, and other colors such as blue, green, or red may be used. For example, when the user enters the room from the vicinity of area E1 or area E2, the output control unit 123 controls the projector to project the human figure image onto the projection area 211 in the user's line-of-sight direction (which may be the approximate orientation of the face, head, or body).
Further, the output control unit 123 may perform display control that makes the human figure image gradually smaller and/or gradually lighter (higher transmittance) when the user moves in the direction of the appropriate viewing place (the correct direction), and makes the human figure image gradually larger and/or gradually darker (lower transmittance) when the user moves in a direction different from the direction of the appropriate viewing place (a wrong direction). That is, the output control unit 123 performs display control that changes the display state of the shadow image according to changes in the positional relationship (such as the distance) between the appropriate viewing place and the user. This makes it possible to guide the user to the appropriate viewing place implicitly and more reliably. Here, FIG. 5 shows an example of guidance using a human figure image according to the present embodiment.
As shown in FIG. 5, for example, when the user enters the room from a place located diagonally with respect to the place where the projector 210 is installed (to the side of the projector 210), the output control unit 123 displays the human-shadow image 530. Note that the user's actual shadow appears on the extension line of the projector 210 and the user, in a direction different from the line-of-sight direction.
When the user moves in the direction of the appropriate viewing place T, the output control unit 123 causes the display position of the human-shadow image 530 to follow the user's movement and controls the display so that the human-shadow image 530 becomes smaller. Since the user naturally moves in the direction in which the shadow becomes smaller, so that the shadow does not interfere with viewing the content, the user can be guided implicitly to the appropriate viewing place T. Conversely, when the user moves in a direction different from the appropriate viewing place T (for example, toward the vicinity of the projector 210), the output control unit 123 causes the display position of the human-shadow image 530 to follow the user's movement and controls the display so that the human-shadow image 530 becomes larger. In this case, the user is likely to consider movement in that direction undesirable, because the shadow would interfere with viewing the content, and to turn toward the correct direction (the direction in which the shadow becomes smaller); again, the user can be guided implicitly to the appropriate viewing place T.
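The following is a rough illustrative sketch of this display control in Python. It is not part of the disclosed embodiment; the function name shadow_state_for_distance, the linear scaling rule, and the numeric limits are assumptions made only for illustration. It derives the size and transmittance of the shadow image from the distance between the user and the appropriate viewing place T.

from dataclasses import dataclass

@dataclass
class ShadowState:
    scale: float  # 1.0 = full-size shadow image
    alpha: float  # 1.0 = fully opaque, 0.0 = fully transparent

def shadow_state_for_distance(distance_to_target: float,
                              start_distance: float,
                              min_scale: float = 0.2,
                              max_scale: float = 1.5,
                              min_alpha: float = 0.1) -> ShadowState:
    """Shrink and fade the shadow image as the user approaches the target
    viewing place; enlarge and darken it as the user moves away from it."""
    progress = distance_to_target / max(start_distance, 1e-6)
    progress = max(0.0, min(progress, max_scale))  # clamp the growth
    scale = max(min_scale, progress)               # smaller when closer
    alpha = max(min_alpha, min(progress, 1.0))     # lighter when closer
    return ShadowState(scale=scale, alpha=alpha)

# Example: the user starts 3.0 m away from the appropriate viewing place T.
print(shadow_state_for_distance(3.0, 3.0))  # full size, fully opaque
print(shadow_state_for_distance(1.0, 3.0))  # smaller and lighter
print(shadow_state_for_distance(0.1, 3.0))  # nearly gone; display may be ended
print(shadow_state_for_distance(4.5, 3.0))  # moving the wrong way; larger than at the start

In this sketch the shadow reaches its minimum size and maximum transparency at the target, which corresponds to the point at which the display of the human-shadow image may be ended.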
Guidance using the human-shadow image is not limited to guidance in the left-right direction with respect to the projection area 211 (right or left as seen by a user facing the projection area 211) as described above. For example, when the output control unit 123 wants to move the user backward (backward for a user facing the projection area 211), it may display a large shadow in the user's line-of-sight direction and perform display control so that the shadow becomes smaller the farther the user moves backward (for example, the more the user walks or steps back). Likewise, when the output control unit 123 wants to move the user forward (forward for a user facing the projection area 211), it may display a large shadow in the user's line-of-sight direction and perform display control so that the shadow becomes smaller the farther forward the user moves (the closer the user comes to the projection area 211).
The case of using a human-shadow image, which represents the shadow of a person, as an example of the virtual object has been described here, but the present embodiment is not limited to this, and a virtual object other than a human-shadow image may be used. For example, the output control unit 123 may use silhouette images of various figures such as circles, squares, triangles, ellipses, polygons, trapezoids, rhombuses, and cloud shapes as other examples of the virtual object. Such silhouette images of various shapes are referred to as graphic shadow images. Like a typical shadow, the color of a graphic shadow image may be close to black or gray, or may be another color such as blue, green, or red.
FIG. 6 is a diagram illustrating an example of guidance using a graphic shadow image according to the present embodiment. As shown in FIG. 6, for example, the output control unit 123 displays a circular graphic shadow image 570a in the user's line-of-sight direction within the projection area 211; when the user is moving in the correct direction, it displays the circular shadow smaller and lighter, as with the graphic shadow images 570d and 570e, and when the user is moving in a wrong direction, it displays the circular shadow larger and darker, as with the graphic shadow images 570b and 570c.
The implicit guidance by display control of a shadow image described above may be performed before the user starts viewing the content, that is, before the content is projected onto the projection area 211. Even when the content is already being projected, the shadow image (virtual object) may be displayed superimposed on the content to perform implicit guidance.
Although it has been explained that the shadow image is displayed in the user's line-of-sight direction (the direction the body is facing, in front of the user, and so on), the shadow image may be displayed in the user's line-of-sight direction (the direction of the face or body) when the user is inside the projection area 211, and in the region of the projection area 211 closest to the user when the user is outside the projection area 211.
(Guidance by content control)
In guidance by content control, the output control unit 123 displays some content in the projection area 211 and changes the display state of the content in accordance with the user's face orientation, head orientation, line-of-sight direction, body orientation, posture (standing / sitting / crouching), and so on, so that the user is expected to change his or her own face orientation, posture, and the like naturally in order to see the content better.
For example, the output control unit 123 may control the strength of blurring of the content in accordance with the user's face orientation, head orientation, line-of-sight direction, and so on, so that the content comes into focus and is clearly visible only when viewed from a certain viewpoint (a desirable place or direction). The user then naturally tilts his or her face or moves to look from a different direction or place, and can thus be guided implicitly to a desirable face orientation, line-of-sight direction, or place. "Strength of blurring" is given here as an example, but the present embodiment is not limited to this; the user's face orientation and the like can also be guided implicitly by controlling the saturation, brightness, or transmittance of the content.
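Purely as an illustration (not part of the disclosed configuration; the function name, the two-dimensional gaze vector, and the maximum blur value are assumptions), the strength of blurring could be tied to how far the user's current gaze direction deviates from the desirable viewing direction:

import math

def blur_radius_for_gaze(gaze_dir, desired_dir, max_blur_px: float = 12.0) -> float:
    """Return a blur radius (in pixels) that is zero when the user looks in the
    desired direction and grows with the angular error, so that the content
    appears sharp only from the intended viewpoint."""
    def unit(v):
        n = math.hypot(v[0], v[1])
        return (v[0] / n, v[1] / n) if n > 0 else (0.0, 0.0)

    gx, gy = unit(gaze_dir)
    dx, dy = unit(desired_dir)
    cos_err = max(-1.0, min(1.0, gx * dx + gy * dy))
    angle_err = math.acos(cos_err)  # 0 .. pi radians
    return max_blur_px * (angle_err / math.pi)

# Example: a gaze 45 degrees off the desired direction receives a moderate blur.
print(round(blur_radius_for_gaze((1.0, 1.0), (1.0, 0.0)), 2))  # 3.0

The same error term could equally drive saturation, brightness, or transmittance instead of blur.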
When the output control unit 123 wants to make the user sit or crouch so that the user's actual shadow becomes smaller, it can do so naturally by making the content displayed (projected) on the floor smaller. When the content is displayed small, the user is expected to bring his or her face closer to the content to see it better and, as a result, naturally sit, crouch, or bend down.
By presenting content that uses an optical illusion, the output control unit 123 can also implicitly guide the user's face orientation, line-of-sight direction, posture, and so on. An example of content using an optical illusion is an image in which certain characters or figures become visible, or a 2D image appears 3D, depending on the viewing direction or angle. The output control unit 123 may also perform control so that certain characters or figures are displayed, or 2D changes to 3D, when the user's face orientation or posture reaches the orientation or line of sight to which the user is to be guided (that is, display control that makes the image behave like an optical-illusion image). The user is then expected to adjust his or her face orientation and line-of-sight direction naturally out of interest in the illusion image.
The output control unit 123 can also perform control to localize a sound source in a predetermined direction and output sound, thereby implicitly guiding the user to turn around or direct the line of sight in an arbitrary direction (the direction of the sound source).
The implicit guidance by content control described above may be performed after guidance of the user's physical position has been performed. The content used for guidance may be content that the user views (a moving image, a still image, a website, text, a GUI, or the like) or content prepared specifically for guidance.
The implicit guidance control according to the present embodiment when guidance is required has been described above in detail. After the guidance control ends, the output control unit 123 may start viewing of the content. It can also be assumed that the user does not move to the appropriate viewing place even when the guidance control is performed. The output control unit 123 may start viewing of the content even when the user does not move to the appropriate viewing place after a certain amount of implicit guidance (or moves to a place other than the appropriate viewing place).
On the other hand, when guidance is not required (step S112 / No), that is, when the user's position is already an appropriate viewing place, the output control unit 123 may start viewing of the content as it is, without performing guidance.
The start of viewing of the content may be, for example, display of a menu screen or playback of a start video. The output control unit 123 may start viewing of predetermined content in response to a user operation (a gesture, a touch operation on the projection area 211, a predetermined switch or button operation, voice input, operation input using a personal terminal such as a smartphone, or the like), or may start viewing of the content automatically.
<3-2. Second guidance processing example>
Next, a second guidance processing example will be described. In a situation where the projection area 211 exists on the floor surface of the real space, it is also assumed that the user walks around the real space and ends up inside the projection area 211. In this case, if the content is displayed so as to directly face the user's face orientation or body orientation, part of the content may be blocked by the user, or the content may not be displayable at a sufficient size, and the user's visibility and comfort may deteriorate.
Therefore, as a second guidance processing example, it is proposed to implicitly guide the user's face orientation, posture, and so on by controlling the content being viewed. In the case of guidance using the content being viewed, guidance with a small physical load that improves visibility and comfort in a short time, such as changing the user's face orientation or body orientation, is preferable to guidance with a large physical load, such as moving the user's physical position. A specific description is given below with reference to FIG. 7.
FIG. 7 is a flowchart showing an example of the flow of the second guidance processing example according to the present embodiment. As shown in FIG. 7, first, the output control unit 123 controls projection of content (step S203). The projection control may start playback of predetermined content in accordance with a user operation, or may start playback of predetermined content automatically. Here, as an example, it is assumed that the content is projected onto the floor of the living space. The first guidance processing described above may be performed before this projection control of the content.
Next, the recognition unit 121 performs environment recognition of the living space (step S206). The environment recognition may be performed before the projection control of the content or after the projection control of the content starts. Examples of environment recognition of the living space include space recognition, recognition of the projection area 211, recognition of the projection surface, and recognition of shields.
Next, the recognition unit 121 recognizes the position of the user (step S209). The recognition unit 121 may also recognize the user's face orientation, head orientation, body orientation, or line-of-sight direction, and the output control unit 123 may control the orientation of the content so that it directly faces the user. The output control unit 123 may also continuously recognize the position and orientation of the user and, when the user moves while viewing the content, change the display position and orientation of the content so as to follow the user's position and orientation. This allows the user to view the content at any place in the living space.
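As an illustrative sketch only (the function name, the fixed offset, and the two-dimensional floor-plane model are assumptions, not part of the embodiment), the display position and orientation of content that follows the user might be computed as follows:

import math

def content_pose_for_user(user_pos, user_facing, offset: float = 1.2):
    """Place the content a fixed distance in front of the user and return an
    orientation (yaw) that makes the content directly face the user."""
    fx, fy = user_facing
    norm = math.hypot(fx, fy) or 1.0
    fx, fy = fx / norm, fy / norm
    position = (user_pos[0] + fx * offset, user_pos[1] + fy * offset)
    yaw = math.atan2(fy, fx)  # rotate the content to match the viewing direction
    return position, yaw

# Example: the user stands at (2.0, 3.0) on the floor, facing the +x direction.
print(content_pose_for_user((2.0, 3.0), (1.0, 0.0)))  # ((3.2, 3.0), 0.0)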
Next, the output control unit 123 acquires the positional relationship among the projector 210, the projection area 211, and the user (step S212).
The output control unit 123 then determines, based on the above positional relationship, whether there is a high possibility of a decrease in visibility (step S215). For example, when the user is located inside the projection area 211, visibility may decrease depending on the user's orientation and the position of the projector 210. FIG. 8 is a diagram illustrating the decrease in visibility when the user is located inside the projection area 211. As shown on the left of FIG. 8, when the light-source direction of the projector 210 and the user's line-of-sight direction are substantially the same, the light strikes the user's back, so the user's shadow falls within the user's visual field range S and visibility is likely to decrease. On the other hand, as shown on the right of FIG. 8, when the light-source direction of the projector 210 and the user's line-of-sight direction are opposed, the light arrives from in front of the user, so no shadow falls within the user's visual field range S (the shadow appears behind the user, on the extension line of the projector 210 and the user's position), and visibility is unlikely to decrease.
Depending on the user's position within the projection area 211, the area available for displaying the content may also be narrow, so that the content has to be reduced in size for display.
As described above, depending on the positional relationship among the projector 210, the projection area 211, and the user, there may be a high possibility of a decrease in visibility. Whether the possibility of a decrease in visibility is high may be determined based on whether the positional relationship among the projector 210, the projection area 211, and the user satisfies a predetermined condition. For example, conditions may be set in advance specifying, for given positions of the projector 210 and the projection area 211, at which positions in the projection area 211 the user is likely to experience a decrease in visibility. Alternatively, the control unit 120 may acquire positional relationships with a high possibility of decreased visibility by machine learning.
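One possible form of such a predetermined condition is sketched below purely for illustration (the two-dimensional model, the angle threshold, and the function name are assumptions): visibility is treated as likely to degrade when the user stands inside the projection area and looks roughly in the same direction in which the projector casts the user's shadow.

import math

def visibility_likely_degraded(projector_pos, user_pos, gaze_dir,
                               projection_area,  # (xmin, ymin, xmax, ymax)
                               angle_threshold_deg: float = 60.0) -> bool:
    """Heuristic visibility check based on projector, projection area, and user."""
    xmin, ymin, xmax, ymax = projection_area
    inside = xmin <= user_pos[0] <= xmax and ymin <= user_pos[1] <= ymax
    if not inside:
        return False

    def unit(x, y):
        n = math.hypot(x, y) or 1.0
        return x / n, y / n

    # Direction in which the shadow is cast: from the projector through the user.
    sx, sy = unit(user_pos[0] - projector_pos[0], user_pos[1] - projector_pos[1])
    gx, gy = unit(gaze_dir[0], gaze_dir[1])
    angle = math.degrees(math.acos(max(-1.0, min(1.0, sx * gx + sy * gy))))
    return angle < angle_threshold_deg  # gaze roughly aligned with the shadow

# Example: projector behind the user, user looking away from it -> True.
print(visibility_likely_degraded((0, 0), (2, 2), (1, 1), (1, 1, 4, 4)))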
When it is determined that there is a high possibility of decreased visibility (step S215 / Yes), the output control unit 123 performs implicit guidance that changes the user's face orientation or body orientation by controlling the content being viewed by the user, thereby improving visibility (step S218). A specific example of implicit guidance by content control is described below with reference to FIG. 9.
FIG. 9 is a diagram illustrating implicit guidance by control of the content being viewed in the second guidance processing example. For example, assume that the user is located inside the projection area 211 as shown on the left of FIG. 9, or that the user is walking around the room and is about to reach the position shown on the left of FIG. 9. In the positional relationship shown on the left of FIG. 9, a shadow appears in the user's line-of-sight direction and interferes with viewing of the content 500 (the shadow overlaps the content 500), so visibility decreases. The output control unit 123 therefore controls the content 500 to implicitly guide the user's face orientation, posture, and so on to an arbitrary direction or posture, thereby improving visibility.
Specifically, for example, as shown in the upper right of FIG. 9, the output control unit 123 performs control to move the content 500 to a place different from the place where the user's shadow appears (that is, a place where the user's shadow does not overlap the content). For example, when the content 500 is rotated approximately around the user's position, the user can continue viewing in a direction in which his or her own shadow does not get in the way simply by changing body orientation, without the physical load of moving, and visibility improves. The movement of the content 500 is not limited to rotation; it may be, for example, control that shifts the content 500 sideways (an example of planar movement control). When there is no space to display the content after rotation, or when there is an obstacle in the rotation direction, the content 500 may be shifted sideways instead. The output control unit 123 may also move the content to a wall rather than the floor for display (an example of three-dimensional movement control). For example, by moving the content 500 to a wall at the user's side, the user can continue viewing in a direction in which his or her own shadow does not get in the way simply by changing body orientation, without the physical load of moving, and visibility improves.
In this way, the output control unit 123 can improve visibility by moving the content 500 to a place that differs from the place where the user's shadow appears and from which the user can continue viewing, with the shadow out of the way, simply by changing body orientation or line-of-sight direction and without the physical load of moving.
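A minimal sketch of such a movement control, assuming a simple two-dimensional floor model (the function name, the clearance angle, and the rotation rule are illustrative assumptions, not part of the embodiment), is shown below; it rotates the content about the user's position until it is clear of the direction in which the user's shadow is cast.

import math

def rotate_content_away_from_shadow(user_pos, content_pos, projector_pos,
                                    clearance_deg: float = 90.0):
    """Rotate the content's anchor point about the user so that it ends up at
    least `clearance_deg` away from the direction in which the user's shadow
    is cast (the direction from the projector through the user)."""
    ux, uy = user_pos
    shadow_angle = math.atan2(uy - projector_pos[1], ux - projector_pos[0])
    cx, cy = content_pos[0] - ux, content_pos[1] - uy
    radius = math.hypot(cx, cy)
    content_angle = math.atan2(cy, cx)

    # Signed angular difference between the content direction and the shadow direction.
    diff = math.atan2(math.sin(content_angle - shadow_angle),
                      math.cos(content_angle - shadow_angle))
    needed = math.radians(clearance_deg)
    if abs(diff) >= needed:
        return content_pos  # already clear of the shadow
    target = shadow_angle + math.copysign(needed, diff if diff != 0 else 1.0)
    return (ux + radius * math.cos(target), uy + radius * math.sin(target))

# Example: projector at the origin, user at (2, 2), content directly in the
# shadow direction -> the content is rotated around the user to roughly (1, 3).
print(rotate_content_away_from_shadow((2, 2), (3, 3), (0, 0)))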
Further, for example, as shown in the lower right of FIG. 9, the output control unit 123 can also perform display control that makes the content 500 smaller, so that the user naturally crouches and the area of the shadow becomes smaller, thereby improving visibility. When the content 500 is displayed small, the user is expected to naturally bend or crouch to bring his or her face closer to the content 500 in order to see it better. Note that this display control of the content 500 for making the user crouch is only an example, and the present embodiment is not limited to it.
The case has been described above in which control of the content guides the user to change face orientation, line-of-sight direction, body orientation, or posture, improving visibility without the physical load of moving.
The control of the content and the actions induced in the user are not limited to the above examples. For example, the user can be made to stand up by control that enlarges the content, and the user's face orientation, head orientation, and line-of-sight direction can be changed by control that moves the content from the floor to a wall or ceiling (an example of three-dimensional movement control).
Implicit guidance by control of the content being viewed has been described above, but the present embodiment is not limited to control of the content being viewed. For example, when the user starts or is about to start viewing content while located inside the projection area 211, the user's line-of-sight direction, face orientation, body orientation, posture, and so on may be implicitly guided by controlling the content to be projected.
<<4. Modification examples>>
In the embodiment described above, the case of a single user has been described, but the present disclosure is not limited to this; even when there are a plurality of users, the users can be moved to appropriate viewing places implicitly by control of virtual objects and content. For example, when a plurality of users view the same content together, the output control unit 123 performs control to implicitly guide the plurality of users to a single appropriate viewing place. When a plurality of users view different pieces of content, the output control unit 123 performs control to implicitly guide each user to the appropriate viewing place for the corresponding content. When a plurality of users are in the real space, a desirable appropriate viewing place is, for example, a place where no user's shadow interferes with viewing of the content.
As an example, when the projection area 211 is on the floor surface of the living space and has a certain size, it is assumed that a plurality of users simultaneously view different pieces of content or perform interactions. User guidance and content display control in such cases are specifically described below.
FIG. 10 is a diagram illustrating a first control according to a modification example. As shown on the left of FIG. 10, first assume that user B enters later while user A is viewing or operating the content 502 in the living space. At this time, when there is empty space in the projection area 211, the viewing place determination unit 122 determines an appropriate viewing place for user B on the premise that new content will be arranged in the empty space.
The arrangement place and orientation of the new content, and the appropriate viewing place for user B, can be determined in consideration of the position of the projector 210 and the display position of the content 502 already displayed in the projection area 211 and being viewed by user A. Specifically, it is desirable that the place be one where user B's shadow does not interfere with the viewing of either user B or user A, and from which the new content can be displayed efficiently at a large size. For example, in the example shown on the left of FIG. 10, the viewing place T located on the right side of the projection area 211 as seen from the projector 210 is determined as the appropriate viewing place.
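Purely for illustration (the point-shadow approximation, the candidate-scoring rule, and the function name are assumptions rather than the disclosed method), a viewing place for the newly arrived user could be chosen by checking how far each candidate standing position would throw its shadow from the displayed content:

import math

def choose_viewing_place(candidates, projector_pos, content_centers,
                         shadow_length: float = 1.5,
                         min_clearance: float = 0.8):
    """Pick the candidate standing position whose (approximate) shadow stays
    farthest from every displayed piece of content. The shadow is modeled as a
    single point `shadow_length` metres beyond the user along the
    projector-to-user line."""
    def shadow_point(pos):
        dx, dy = pos[0] - projector_pos[0], pos[1] - projector_pos[1]
        n = math.hypot(dx, dy) or 1.0
        return pos[0] + dx / n * shadow_length, pos[1] + dy / n * shadow_length

    best, best_score = None, -1.0
    for cand in candidates:
        sp = shadow_point(cand)
        clearance = min(math.dist(sp, c) for c in content_centers)
        if clearance >= min_clearance and clearance > best_score:
            best, best_score = cand, clearance
    return best  # None if every candidate would shadow some content

# Example: projector at the origin, user A's content centered around (2.0, 1.0).
print(choose_viewing_place([(3.0, 0.5), (1.0, 3.0)], (0.0, 0.0), [(2.0, 1.0)]))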
Next, the output control unit 123 implicitly guides user B to the appropriate viewing place T. As the guidance method, the guidance using a shadow image described with reference to FIGS. 5 and 6 can be used. Then, as shown on the right of FIG. 10, user B can start viewing and operating the new content 504 without visibility being decreased by his or her own shadow and without disturbing user A's viewing.
It is also conceivable that there is no empty space in the projection area 211 when user B enters the living space. Control in such a case is described below with reference to FIG. 11.
FIG. 11 is a diagram illustrating a second control according to a modification example of the present embodiment. As shown on the left of FIG. 11, first assume that user B enters later while user A is viewing or operating the content 502 in the living space. At this time, when there is no empty space in the projection area 211, the viewing place determination unit 122 determines an appropriate viewing place for user B on the premise that the content 502 will be reduced in size to secure empty space for user B. The arrangement place and orientation of the new content, and the appropriate viewing place for user B, can be determined in consideration of the position of the projector 210 and the display position of the content 502 already displayed in the projection area 211 and being viewed by user A (the position after reduction, when the content is reduced to secure empty space). Specifically, it is desirable that the place be one where user B's shadow does not interfere with the viewing of either user B or user A, and from which the new content can be displayed efficiently at a large size. For example, in the example shown on the left of FIG. 11, the viewing place T located on the right side of the projection area 211 as seen from the projector 210 is determined as the appropriate viewing place.
In the display control of the content 502 for securing empty space, it is preferable that user A, who is viewing, not be subjected to a physical load such as a large movement of physical position. Examples of such display control therefore include reducing the content 502 and displaying it near user A, rotating it approximately around user A's position, and shifting it somewhat sideways with respect to user A. Display control that moves the content 502 to a wall visible from user A's position is another example.
Next, the output control unit 123 implicitly guides user B to the appropriate viewing place T. As the guidance method, the guidance using a shadow image described with reference to FIGS. 5 and 6 can be used. The output control unit 123 may perform the guidance control after reducing the content 502 and securing the empty space. Then, as shown on the right of FIG. 11, the output control unit 123 displays the new content 504 in the empty space secured by reducing the content 502. User B, guided to the appropriate viewing place T, can start viewing and operating the new content 504 without visibility being decreased by his or her own shadow and without disturbing user A's viewing.
Content display control when there are a plurality of users is not limited to the examples described above. For example, when user B enters the living space, the output control unit 123 arranges the new content 504 in the empty space secured by reducing the content 502. The recognition unit 121 then recognizes the position of user B, who has moved to view the content 504, and it is determined from the positional relationship among the projector 210, user B, and each piece of content whether user B's shadow overlaps the content 502 or the content 504. When there is a risk that user B's shadow will overlap the content 502 or the content 504 and reduce visibility, the output control unit 123 may change user B's position, face orientation, line-of-sight direction, posture, and so on by implicit guidance using display of virtual objects or content control as described above, thereby optimizing user B's position and the display of the content.
Even when the number of users increases further, a decrease in visibility due to each user's shadow or the like can be prevented by appropriately controlling each user's position and the display position of the content based on the relationship among the position of each user, the position of the projector 210 (light source), and the position of each piece of content.
For example, guidance of each user and optimal arrangement of content when four users A to D enter the living space is described with reference to FIG. 12. FIG. 12 is a diagram illustrating a third control according to a modification example of the present embodiment. As shown on the left of FIG. 12, when users A to D enter the living space from the respective entrances, the viewing place determination unit 122 determines appropriate viewing places T1 to T4 from the position of the projector 210 and the position of the projection area 211, and each user is implicitly guided using a shadow image or the like. Then, as shown on the right of FIG. 12, all the users can view the content 506a to 506d from positions where their own shadows do not obstruct the other users' viewing, each at a size that makes maximum use of the projection area 211, so visibility and comfort improve. The content 506a to 506d may be different content for each user, or may be the same content; for example, in a team discussion, the same content may be shared and viewed by every user.
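As an illustrative sketch only (the equal vertical strips, the greedy nearest-strip assignment, and the function name are assumptions, not the disclosed method), the projection area could be divided among several users roughly as follows, so that each piece of content uses the projection area as fully as possible:

def assign_content_regions(projection_area, entry_points):
    """Split the rectangular projection area into equal vertical strips and give
    each user the strip closest to his or her entry position."""
    xmin, ymin, xmax, ymax = projection_area
    n = len(entry_points)
    width = (xmax - xmin) / n
    strips = [(xmin + i * width, ymin, xmin + (i + 1) * width, ymax) for i in range(n)]

    assignment = {}
    free = list(range(n))
    for user, (ux, _uy) in enumerate(entry_points):
        # Greedily pick the nearest still-unassigned strip for this user.
        best = min(free, key=lambda i: abs((strips[i][0] + strips[i][2]) / 2 - ux))
        assignment[user] = strips[best]
        free.remove(best)
    return assignment

# Example: four users A to D entering along one side of a 4 m x 3 m projection area.
print(assign_content_regions((0, 0, 4, 3), [(0.2, -1), (1.3, -1), (2.6, -1), (3.8, -1)]))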
Note that the positions of the appropriate viewing places T1 to T4 are not limited to the example shown in FIG. 12, and the appropriate viewing places T1 to T4 may be within the projection area 211.
<<5. Supplement>>
Although the preferred embodiments of the present disclosure have been described in detail above with reference to the accompanying drawings, the present technology is not limited to these examples. It is obvious that a person having ordinary knowledge in the technical field of the present disclosure can conceive of various changes or modifications within the scope of the technical ideas described in the claims, and it is understood that these naturally belong to the technical scope of the present disclosure.
For example, in the embodiment described above, the case where the projection area 211 is on the floor surface has been described, but the present disclosure is not limited to this, and the projection area 211 may be on a wall, a ceiling, a table top, or the like.
When a plurality of pieces of content are displayed in the projection area 211 as shown in FIGS. 10 to 12, the pieces of content may be projected by a plurality of projectors provided in the living space. In this case, the viewing place determination unit 122 determines the appropriate viewing places in consideration of the positions of the plurality of projectors.
The steps of the flowcharts shown in FIGS. 4 and 7 may be processed in parallel as appropriate, or may be processed in the reverse order.
In the embodiment described above, it has been explained that the user's shadow is caused by the light source of the projector 210, but the light source that causes the shadow is not limited to the projector 210. The viewing place determination unit 122 may calculate places where a shadow is cast in consideration of light sources other than the projector 210, such as lighting devices installed in the living space, and determine the appropriate viewing place accordingly.
The display device that displays the shadow image (virtual object) for performing implicit guidance may be a display device different from the display device that displays the content, or may be the same display device.
The virtual object and the content are examples of images displayed in the real space.
In the embodiment described above, the projector 210, which projects an image into the real space, has been used as an example of the output device 200, but the present disclosure is not limited to this. The output device 200 may be, for example, a glasses-type device provided with a transmissive display or an HMD worn on the user's head. In such a display device, the content may be displayed superimposed on the real space (AR (Augmented Reality) display). The output control unit 123 can also implicitly guide the user by displaying a virtual object such as a human-shadow image in AR or by controlling the display state of the content, moving the user to an arbitrary place, directing the face or line of sight in an arbitrary direction, or changing the posture to an arbitrary posture, thereby improving visibility and comfort. The arbitrary place may be determined by the viewing place determination unit 122 based on, for example, the shape of the real space, the arrangement of furniture and the like, the position and size of flat areas, the details of the content to be displayed, the positions of the user and other users in the real space, and the positional relationships of light sources such as lighting devices arranged in the real space and the lighting environment. The position and posture of the content superimposed and displayed in the real space may be associated with the floor surface or walls of the real space.
The display device that performs AR display of content in the real space may also be a smartphone, a tablet terminal, or the like. Even with a non-transmissive display device, AR display can be realized by displaying a captured image of the real space on the display unit in real time (so-called through-image display, also called live view) and superimposing the content on the captured image.
It is also possible to create one or more computer programs for causing hardware such as a CPU, a ROM, and a RAM built into the information processing device 100, the output device 200, or the sensor 300 described above to exhibit the functions of the information processing device 100, the output device 200, or the sensor 300. A computer-readable storage medium storing the one or more computer programs is also provided.
The effects described in this specification are merely explanatory or illustrative and are not limiting. That is, the technology according to the present disclosure can exhibit other effects that are obvious to those skilled in the art from the description of this specification, in addition to or instead of the above effects.
The present technology can also have the following configurations.
(1)
An information processing device including a control unit that performs, by a display device, display control that implicitly guides a user to a specific viewing place in a real space, based on a position of a display area of an image recognized from sensing data of the real space and a position of the user.
(2)
The information processing device according to (1), in which the control unit determines the specific viewing place based on the position of the display area and a position of a light source in the real space, and, when the position of the user is not the specific viewing place, performs display control that implicitly guides the user to the specific viewing place.
(3)
The information processing device according to (2), in which the position of the light source includes a position of the display device, which projects an image into the real space.
(4)
The information processing device according to (3), in which the display area includes a range in which the display device can project an image.
(5)
The information processing device according to any one of (2) to (4), in which the control unit determines, as the specific viewing place, a place outside or inside the display area at which a shadow appearing on an extension line of the light source and the user is in a position different from the line-of-sight direction of the user.
(6)
The information processing device according to any one of (1) to (5), in which, as the display control that implicitly guides the user, the control unit performs control to display a virtual object included in the image in front of the user.
(7)
The information processing device according to (6), in which the control unit performs control to change a display state of the virtual object in accordance with a change in the positional relationship between the specific viewing place and the position of the user.
(8)
The information processing device according to (7), in which the control unit displays the virtual object smaller as the position of the user comes closer to the specific viewing place.
(9)
The information processing device according to (7) or (8), in which the control unit displays the virtual object with higher transmittance as the position of the user comes closer to the specific viewing place.
(10)
The information processing device according to any one of (6) to (9), in which the control unit performs control to cause the display position of the virtual object to follow the position of the user.
(11)
The information processing device according to any one of (6) to (10), in which the virtual object is a shadow image of a person or a figure.
(12)
The information processing device according to any one of (1) to (5), in which, as the display control that implicitly guides the user, the control unit changes a display state of content included in the image that is to be viewed by the user.
(13)
The information processing device according to (12), in which the control unit changes the display state of the content in accordance with the face orientation or location of the user with respect to the content.
(14)
The information processing device according to (12), in which, as the display control that implicitly guides the user, the control unit performs control to move the content, within the display area, to a place that does not overlap a shadow appearing on an extension line of the display device and the user, based on the position of the display area, a position of the display device that projects the content onto the display area, and the position of the user viewing the content.
(15)
The information processing device according to any one of (1) to (14), in which, when a plurality of users are recognized from the real space, the control unit performs control to implicitly guide each user to a respective specific viewing place at which the users do not obstruct one another's viewing of content, based on the position of the display area and a position of a display device that projects content onto the display area.
(16)
The information processing device according to any one of (1) to (15), in which, when a new user is recognized from the real space, the control unit performs control to implicitly guide the new user to a specific viewing place that does not obstruct viewing by a first user who is already viewing content in the real space, based on a position of the first user, a position of the content being viewed by the first user, and a position of a display device that projects content onto the display area.
(17)
An information processing method including performing, by a processor, display control by a display device that implicitly guides a user to a specific viewing place in a real space, based on a position of a display area of an image recognized from sensing data of the real space and a position of the user.
(18)
A program for causing a computer to function as a control unit that performs, by a display device, display control that implicitly guides a user to a specific viewing place in a real space, based on a position of a display area of an image recognized from sensing data of the real space and a position of the user.
100 Information processing device
110 I/F unit
120 Control unit
121 Recognition unit
122 Viewing place determination unit
123 Output control unit
130 Input unit
140 Storage unit
200 Display device
210 Projector
300 Sensor
310 Camera
320 Distance measuring sensor

Claims (18)

  1.  実空間のセンシングデータから認識される画像の表示領域の位置と、ユーザの位置と、に基づいて、前記実空間における特定の視聴場所に前記ユーザを非明示的に誘導する表示制御を表示装置により行う制御部を備える、情報処理装置。 A display device performs display control that implicitly guides the user to a specific viewing place in the real space based on the position of the display area of the image recognized from the sensing data in the real space and the position of the user. An information processing device provided with a control unit for performing.
  2.  前記制御部は、前記表示領域の位置と、前記実空間における光源の位置とに基づいて、前記特定の視聴場所を決定し、前記ユーザの位置が前記特定の視聴場所ではない場合、前記ユーザを前記特定の視聴場所に非明示的に誘導する表示制御を行う、請求項1に記載の情報処理装置。 The control unit determines the specific viewing place based on the position of the display area and the position of the light source in the real space, and if the position of the user is not the specific viewing place, the user is selected. The information processing apparatus according to claim 1, wherein display control for implicitly guiding to the specific viewing place is performed.
  3.  前記光源の位置は、前記実空間に画像を投影する前記表示装置の位置を含む、請求項2に記載の情報処理装置。 The information processing device according to claim 2, wherein the position of the light source includes the position of the display device that projects an image into the real space.
  4.  The information processing device according to claim 3, wherein the display area includes a range onto which the display device is capable of projecting an image.
  5.  The information processing device according to claim 2, wherein the control unit determines, as the specific viewing place, a location outside or inside the display area at which a shadow appearing on an extension line of the light source and the user falls at a position different from the line-of-sight direction of the user.
  6.  The information processing device according to claim 1, wherein, as the display control for implicitly guiding the user, the control unit performs control to display a virtual object included in the image in front of the user.
  7.  The information processing device according to claim 6, wherein the control unit performs control to change a display state of the virtual object in accordance with a change in the positional relationship between the specific viewing place and the position of the user.
  8.  The information processing device according to claim 7, wherein the control unit displays the virtual object smaller as the specific viewing place and the position of the user become closer to each other.
  9.  The information processing device according to claim 7, wherein the control unit displays the virtual object with higher transmittance as the specific viewing place and the position of the user become closer to each other.
  10.  The information processing device according to claim 6, wherein the control unit performs control to cause the display position of the virtual object to follow the position of the user.
  11.  The information processing device according to claim 6, wherein the virtual object is a shadow image of a person or a figure.
  12.  The information processing device according to claim 1, wherein, as the display control for implicitly guiding the user, the control unit changes a display state of content that is included in the image and is to be viewed by the user.
  13.  The information processing device according to claim 12, wherein the control unit changes the display state of the content in accordance with the face orientation or the location of the user with respect to the content.
  14.  The information processing device according to claim 12, wherein, as the display control for implicitly guiding the user, the control unit performs control to move the content, within the display area, to a location that does not overlap a shadow appearing on an extension line of the display device and the user, on the basis of the position of the display area, the position of the display device that projects the content onto the display area, and the position of the user who views the content.
  15.  The information processing device according to claim 1, wherein, when a plurality of users are recognized from the real space, the control unit performs control to implicitly guide each user to a respective specific viewing place at which the users do not hinder one another's content viewing, on the basis of the position of the display area and the position of the display device that projects content onto the display area.
  16.  The information processing device according to claim 1, wherein, when a new user is recognized from the real space, the control unit performs control to implicitly guide the new user to a specific viewing place that does not hinder the viewing of a first user who is already viewing content in the real space, on the basis of the position of the first user, the position of the content being viewed by the first user, and the position of the display device that projects content onto the display area.
  17.  An information processing method comprising performing, by a processor, display control by a display device that implicitly guides a user to a specific viewing place in a real space, on the basis of a position of a display area of an image recognized from sensing data of the real space and a position of the user.
  18.  A program causing a computer to function as a control unit that performs, by a display device, display control that implicitly guides a user to a specific viewing place in a real space, on the basis of a position of a display area of an image recognized from sensing data of the real space and a position of the user.
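
The claims above define the guidance behaviour purely functionally; the disclosure does not give an implementation. As an illustrative aid only, the following Python sketch shows one way the shadow condition of claim 5 could be evaluated, assuming a simple 2-D floor-plane model; the function names, the fixed shadow reach, and the angular threshold are assumptions of this example, not part of the claimed subject matter.

    import math

    def shadow_point(source, user, reach=1.5):
        """Point (2-D floor coordinates) where the shadow cast by `user` falls,
        extended from `source` through `user` by `reach` metres."""
        sx, sy = source
        ux, uy = user
        dx, dy = ux - sx, uy - sy
        norm = math.hypot(dx, dy) or 1e-9
        return (ux + dx / norm * reach, uy + dy / norm * reach)

    def is_suitable_viewing_place(candidate, light, gaze_dir, min_angle_deg=30.0):
        """Claim 5 (illustrative): accept a candidate viewing place if the shadow
        the user would cast there does not lie along the user's line of sight."""
        shadow = shadow_point(light, candidate)
        to_shadow = (shadow[0] - candidate[0], shadow[1] - candidate[1])
        dot = to_shadow[0] * gaze_dir[0] + to_shadow[1] * gaze_dir[1]
        mag = (math.hypot(*to_shadow) * math.hypot(*gaze_dir)) or 1e-9
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))
        return angle >= min_angle_deg  # shadow direction differs enough from the gaze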
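
Claims 7 to 9 relate the display state of the guiding virtual object to the distance between the user and the specific viewing place. A minimal sketch of one such distance-to-scale and distance-to-transmittance mapping is given below; the linear interpolation, the max_dist cut-off, and the parameter names are assumptions of this example.

    import math

    def virtual_object_display_state(user_pos, viewing_place, max_dist=3.0,
                                     base_scale=1.0, max_transmittance=0.9):
        """Claims 7-9 (illustrative): shrink the virtual object and raise its
        transmittance as the user approaches the specific viewing place."""
        dist = math.hypot(user_pos[0] - viewing_place[0],
                          user_pos[1] - viewing_place[1])
        closeness = 1.0 - min(dist, max_dist) / max_dist  # 0.0 far away, 1.0 at the place
        scale = base_scale * (1.0 - closeness)            # smaller when closer (claim 8)
        transmittance = max_transmittance * closeness     # more transparent when closer (claim 9)
        return scale, transmittance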
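
Claim 14 concerns moving the content within the display area so that it does not overlap the shadow cast along the extension line from the display device (projector) through the user. The sketch below uses a simplified model in which the projection surface is a wall at a fixed depth and positions are 2-D floor coordinates; the grid search, the fixed shadow half-width, and all parameter names are illustrative assumptions rather than anything specified in the disclosure.

    def user_shadow_x(projector, user, wall_y):
        """X coordinate at which the user's shadow falls on a wall located at
        y = wall_y, cast along the projector-to-user ray (2-D floor model)."""
        px, py = projector
        ux, uy = user
        t = (wall_y - py) / ((uy - py) or 1e-9)  # parameter along the ray projector -> user
        return px + (ux - px) * t

    def relocate_content(display_x_range, projector, user, wall_y,
                         content_width=0.8, shadow_halfwidth=0.4, step=0.1):
        """Claim 14 (illustrative): slide the content along the display area until
        it is clear of the interval occupied by the user's shadow."""
        shadow_x = user_shadow_x(projector, user, wall_y)
        x_min, x_max = display_x_range
        x = x_min
        while x + content_width <= x_max:
            clear_left = x + content_width <= shadow_x - shadow_halfwidth
            clear_right = x >= shadow_x + shadow_halfwidth
            if clear_left or clear_right:
                return x           # leftmost shadow-free position
            x += step
        return None                # no shadow-free position found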
PCT/JP2021/032984 2020-10-29 2021-09-08 Information processing device, information processing method, and program WO2022091589A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-181240 2020-10-29
JP2020181240 2020-10-29

Publications (1)

Publication Number Publication Date
WO2022091589A1 true WO2022091589A1 (en) 2022-05-05

Family

ID=81382253

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/032984 WO2022091589A1 (en) 2020-10-29 2021-09-08 Information processing device, information processing method, and program

Country Status (1)

Country Link
WO (1) WO2022091589A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006080344A1 (en) * 2005-01-26 2006-08-03 Matsushita Electric Industrial Co., Ltd. Guiding device and guiding method
WO2013132886A1 (en) * 2012-03-07 2013-09-12 ソニー株式会社 Information processing device, information processing method, and program
WO2018163637A1 (en) * 2017-03-09 2018-09-13 ソニー株式会社 Information-processing device, information-processing method, and recording medium
JP2019036181A (en) * 2017-08-18 2019-03-07 ソニー株式会社 Information processing apparatus, information processing method and program

Similar Documents

Publication Publication Date Title
US11449133B2 (en) Information processing apparatus and information processing method
JP6511386B2 (en) INFORMATION PROCESSING APPARATUS AND IMAGE GENERATION METHOD
US8199186B2 (en) Three-dimensional (3D) imaging based on motionparallax
US20170132845A1 (en) System and Method for Reducing Virtual Reality Simulation Sickness
JP2013174642A (en) Image display device
US20170264871A1 (en) Projecting device
US11284047B2 (en) Information processing device and information processing method
US11195320B2 (en) Feed-forward collision avoidance for artificial reality environments
US11961194B2 (en) Non-uniform stereo rendering
CN106168855B (en) Portable MR glasses, mobile phone and MR glasses system
JP2015084002A (en) Mirror display system and image display method thereof
JPWO2019039119A1 (en) Information processing apparatus, information processing method, and program
JP2007318754A (en) Virtual environment experience display device
KR102118054B1 (en) remote controller for a robot cleaner and a control method of the same
JP7452434B2 (en) Information processing device, information processing method and program
JPWO2017195514A1 (en) IMAGE PROCESSING APPARATUS, IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD, AND PROGRAM
WO2020017435A1 (en) Information processing device, information processing method, and program
JP2022183213A (en) Head-mounted display
WO2021200494A1 (en) Method for changing viewpoint in virtual space
WO2022091589A1 (en) Information processing device, information processing method, and program
US20180188543A1 (en) Display apparatus and method of displaying using electromechanical faceplate
US20240203066A1 (en) Methods for improving user environmental awareness
CN110060349B (en) Method for expanding field angle of augmented reality head-mounted display equipment
KR102118055B1 (en) remote controller for a robot cleaner and a control method of the same
US20240320930A1 (en) Devices, methods, and graphical user interfaces for capturing media with a camera application

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 21885708; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 21885708; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: JP