WO2017092369A1 - Head-mounted device, three-dimensional video call system, and method for implementing a three-dimensional video call - Google Patents
Head-mounted device, three-dimensional video call system, and method for implementing a three-dimensional video call
- Publication number
- WO2017092369A1 (PCT/CN2016/090294)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- audio
- video
- data
- unit
- cameras
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/147—Communication arrangements, e.g. identifying the communication as a video-communication, intermediate storage of the signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/243—Image signal generators using stereoscopic image cameras using three or more 2D image sensors
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/296—Synchronisation thereof; Control thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/141—Systems for two-way working between two video terminals, e.g. videophone
- H04N7/142—Constructional details of the terminal equipment, e.g. arrangements of the camera and the display
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/14—Systems for two-way working
- H04N7/15—Conference systems
- H04N7/157—Conference systems defining a virtual conference space and using avatars or agents
Definitions
- the present invention relates to virtual reality technologies, and in particular, to a headset device, a three-dimensional video call system, and a method for implementing a three-dimensional video call.
- Virtual reality technology uses computer technology as its core, combined with opto-electronic sensing, to generate a realistic virtual environment within a defined scope (such as an aircraft cockpit, a molecular-structure world, or a hazardous environment) that integrates vision, hearing and touch. With dedicated equipment (motion capture with free spatial positioning, force-feedback input, a digital helmet, a stereoscopic display environment, and so on), the user can interact naturally and realistically with objects in the virtual world in real time, producing the sensation of being present at the scene.
- When engaging in social activities, some users want to show the environment they are in to the other party, for example letting friends see their newly decorated home or the beautiful scenery around them, but existing devices do not meet this need well.
- Some devices do have a camera, but usually only a single one, which captures only part of the scene; a remote user who wants to observe that scene can only follow wherever the camera is pointed and cannot freely choose a viewpoint as panoramic video allows.
- In addition, as mobile broadband network speeds have increased, various video applications have emerged, among which long-distance communication devices are widely used.
- the basic principle of such a long-distance communication device is: data are collected through a camera and a microphone at one end, sent through a network transmission module, and the image is displayed and the sound played on the device at the other end.
- however, this type of long-distance communication device offers only a flat display and cannot faithfully present in 3D what the sending party sees.
- the embodiments of the present invention provide a headset device, a three-dimensional video call system, and a method for implementing a three-dimensional video call, which can capture a relatively complete video image of the scene in which the user is located, enable a real-time 3D video call between two users, and let each party share the scene they are viewing with the other.
- an embodiment of the present invention provides a headset device, including:
- a head-mounted device comprising a video output unit and a head fixing device; the video output unit is mounted on the head fixing device, and the head-mounted device further comprises at least two cameras arranged at intervals from one another on the head fixing device and/or the video output unit, the viewing angles of the at least two cameras combining to cover all directions in the horizontal plane.
- the at least two cameras are arranged on the same circumference.
- the at least two cameras are arranged equidistantly on the same circumference.
- the at least two cameras are on the same horizontal plane.
- the head fixing device comprises: a first fixing component and a second fixing component;
- the first fixing member and the second fixing member each include a fixed end and a free end and an arc portion connecting the fixed end and the free end;
- the first fixing member and the second fixing member surround an annular space for fixing a user's head
- the at least two cameras are arranged on the first fixing part and/or the second fixing part, and the photographing direction is toward the outside of the first fixing part and/or the second fixing part.
- the headset device further includes a panoramic image splicing device connected to each camera.
- the head mounted device further includes a video transmitting device connected to the panoramic image splicing device.
- the head-mounted device further includes an environmental obstacle detection device connected to each camera.
- the head-mounted device further includes an environmental obstacle detection device connected to the panoramic image splicing device.
- the headset further comprises a video capture unit composed of two adjacent cameras; the two adjacent cameras, spaced according to the distance between the two pupils of the human eye, synchronously capture the same scene to obtain left- and right-eye image data from slightly different angles.
- the headset further includes an audio collection unit, an audio and video coding unit, a central processing unit, an audio and video decoding unit, and an audio output unit;
- the audio collection unit includes at least one microphone for synchronously picking up audio data
- the audio and video encoding unit is configured to receive the left- and right-eye image data collected by the video capture unit and the audio data synchronously picked up by the audio collection unit, and to encode the left- and right-eye image data and the audio data into audio-video data and send it to the central processing unit;
- the central processing unit is configured to receive the audio-video data sent by the audio and video encoding unit and send it to the opposite end, and to receive the audio-video data sent by the opposite end and forward it to the audio and video decoding unit;
- the audio and video decoding unit is configured to decode the audio-video data sent by the opposite end into corresponding left- and right-eye image data and audio data, send the decoded left- and right-eye image data to the video output unit for 3D video display, and send the decoded audio data to the audio output unit for audio output.
- the video output unit includes a left display screen for receiving the left-eye image and a right display screen for receiving the right-eye image; a left optical system is disposed on the side of the left display close to the human eye, and a right optical system is disposed on the side of the right display close to the human eye; the left and right optical systems are arranged so that, when a person's left and right eyes view the left and right displays through the corresponding optical systems, the two images fuse into a stereoscopic image with a three-dimensional effect.
- the headset further includes an optical system control knob, which is configured to adjust the distance between the virtual screen formed by the left and right optical systems and the human eye, thereby controlling the display scale of the content viewed by the user.
- the headset further includes a camera distance adjustment knob for fine-tuning the horizontal distance between the two cameras.
- the headset further includes a camera angle adjustment knob for adjusting an angle of each camera.
- the headset further includes an audio and video switching unit, which is configured, according to a user instruction, to switch the audio-video data that the central processing unit sends to the audio and video decoding unit to the audio-video data collected by the local end, or to restore it to the audio-video data sent by the opposite end.
- the audio collection unit comprises two or more microphones
- the central processing unit is further configured to perform stereo processing on the audio data picked up by the two or more microphones.
- the present invention provides a three-dimensional video calling system, including: a network server, at least two headsets provided by the foregoing technical solutions;
- Any two of the headsets establish a connection through the network server and implement a three-dimensional video call through the network server.
- the present invention provides a method for implementing a three-dimensional video call, which utilizes the headset provided by the foregoing technical solution, including:
- transmitting the decoded left- and right-eye image data to the video output unit for 3D video display comprises:
- the left display receives the left eye image
- the right display receives the right eye image
- using a left optical system disposed on the side of the left display close to the human eye and a right optical system disposed on the side of the right display close to the human eye, the images displayed on the left and right display screens are combined into a stereoscopic image with a three-dimensional effect.
- the headset device provided by the embodiments of the present invention includes at least two cameras arranged at intervals on the head fixing device and/or the video output unit, so it can capture the video image of the scene in which the user is located from more directions and more completely.
- by simulating the way the human eyes observe a scene, two adjacent cameras form the video capture unit of the headset and capture left- and right-eye image data with binocular parallax, while the audio collection unit of the headset synchronously picks up audio data; the left- and right-eye image data and the synchronously picked-up audio data are encoded and sent to the opposite headset for 3D display, and at the same time the audio-video data sent by the opposite end is received and decoded; the resulting left- and right-eye image data is sent to the local video output unit for 3D video display and the resulting audio data is sent to the local audio output unit for audio output, so that a real-time 3D video call between the two users is realized and each party can share the scene they are viewing.
- FIG. 1 is a schematic structural view of a head wear device according to an embodiment of the present invention.
- FIG. 2 is a schematic diagram showing the arrangement of cameras in a headset device according to an embodiment of the present invention
- FIG. 3 is a schematic structural diagram of a headset device according to Embodiment 1 of the present invention.
- FIG. 4 is a schematic structural diagram of a structure of a three-dimensional video call system according to Embodiment 2 of the present invention.
- FIG. 5 is a flowchart of a method for implementing a three-dimensional video call according to Embodiment 3 of the present invention.
- an embodiment of a headwear device of the present invention includes a video output unit 1 and a head fixing device 2; the video output unit 1 is mounted on the head fixing device 2, and the headwear device further includes At least two cameras 3, which are arranged at a distance from one another on the head fixture 2 and/or the video output unit 1.
- the head-mounted device provided by the embodiment of the present invention includes at least two cameras arranged at intervals on the head fixing device and/or the video output unit, so that the video image of the scene in which the user is located can be captured from more directions and more completely.
- the video output unit 1 may employ a video output unit in an existing headset, such as may include associated display elements and optical components, and the like.
- the head fixing device 2 includes: a first fixing member 21 and a second fixing member 22; each of the first fixing member 21 and the second fixing member 22 includes a fixed end And a free end and an arc portion connecting the fixed end and the free end; the first fixing member and the second fixing member surround an annular space for fixing a user's head; the at least two cameras 3 are arranged On the first fixing member 21 and/or the second fixing member 22, and the photographing direction is toward the outside of the first fixing member and/or the second fixing member.
- each of the cameras 3 can employ a camera of the prior art.
- the number of cameras can be determined by the shooting range of a single camera, that is, its angle of view: if the camera's field of view (FOV) is large, the spacing between cameras can be increased. For example, if a panoramic video image of the scene in which the headset is located is required and the FOV of each camera reaches 180 degrees, only two cameras mounted 180 degrees apart are needed.
- the scenes captured by the arranged cameras are combined to obtain a panoramic video image of the scene in which the headset is located.
- the number of cameras is four, and the shooting ranges of adjacent cameras are just connected or partially coincident.
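- To make the relationship between per-camera field of view and camera count concrete, the following minimal sketch computes the smallest number of equally spaced cameras that cover a full horizontal circle. It is purely illustrative; the overlap margin and the 100-degree FOV in the second example are assumptions, not values stated in the patent.

```python
import math

def min_cameras_for_360(fov_deg: float, overlap_deg: float = 0.0) -> int:
    """Minimum number of equally spaced cameras whose horizontal FOVs together
    cover 360 degrees; adjacent views may be required to overlap by
    `overlap_deg` so a panorama can be stitched more easily."""
    effective = fov_deg - overlap_deg  # usable horizontal angle per camera
    if effective <= 0:
        raise ValueError("FOV must exceed the required overlap")
    return max(2, math.ceil(360.0 / effective))

print(min_cameras_for_360(180))                  # 2 cameras, views just touching
print(min_cameras_for_360(100, overlap_deg=10))  # 4 cameras with 10 deg overlap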
- the at least two cameras are arranged on the same circumference.
- the at least two cameras are arranged equidistantly on the same circumference.
- the at least two cameras are on the same horizontal plane when the headgear device is fixed on the user's head by the head fixing device.
- Image Stitching is a technique that uses real-life images to form a panoramic space. It combines multiple images into a large-scale image or a 360-degree panorama.
- the image stitching technology involves computer vision, computer graphics, digital image processing, and certain mathematical tools.
- the basic steps of image stitching mainly include the following aspects: camera calibration, sensor image distortion correction, image projection transformation, matching point selection, panoramic image stitching (fusion), and brightness and color equalization processing.
- the technique of image stitching is prior art and will not be elaborated further here.
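- Since the steps listed above are what off-the-shelf stitching tools already implement, a panoramic image splicing device could be prototyped with OpenCV's high-level stitcher. The sketch below is an assumed implementation choice for illustration, not one named in the patent, and the file names are placeholders.

```python
import cv2

def stitch_panorama(image_paths):
    """Stitch frames captured by the individual cameras into one panorama.
    OpenCV's Stitcher handles calibration, projection, matching and blending."""
    images = [cv2.imread(p) for p in image_paths]
    stitcher = cv2.Stitcher.create(cv2.Stitcher_PANORAMA)
    status, panorama = stitcher.stitch(images)
    if status != cv2.Stitcher_OK:
        raise RuntimeError(f"stitching failed with status {status}")
    return panorama

# Example: four cameras arranged around the head strap
pano = stitch_panorama(["cam0.jpg", "cam1.jpg", "cam2.jpg", "cam3.jpg"])
cv2.imwrite("panorama.jpg", pano)
```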
- the headset may further include a video transmitting device connected to the panoramic image splicing device. In this way, it is convenient to transmit the video to other users in different places for viewing.
- the head-mounted device may further include an environmental obstacle detection device connected to each camera, making it convenient to check the surrounding environment for obstacles and remind the user to avoid danger.
- obstacle detection from the images may use existing techniques.
- the environmental obstacle detecting device may also be connected to the panoramic image splicing device.
- in this way, the surrounding obstacles can be detected locally from images of the surrounding scene in the obtained real-time panoramic video of the user's environment.
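- One existing technique that fits this description is to estimate a disparity map from two of the cameras and flag image regions that appear very close. The sketch below is only an assumption about how such a detector might look, not the patent's own algorithm; the thresholds are invented for illustration.

```python
import cv2
import numpy as np

def detect_near_obstacle(left_gray, right_gray, disparity_threshold=48.0):
    """Return True if a large fraction of the view has high disparity (is close).
    left_gray/right_gray are rectified 8-bit grayscale frames from two cameras."""
    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    near_pixels = np.count_nonzero(disparity > disparity_threshold)
    return near_pixels > 0.05 * disparity.size  # >5% of the view is very close
```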
- the present invention simulates the way the human eye observes a scene: two adjacent cameras in the above-described head-mounted device are selected to form a video capture unit, the two adjacent cameras collect image data with left-right parallax, and the image data is sent to the video output unit of the headset for 3D display.
- Embodiment 1:
- FIG. 3 is a schematic structural diagram of a headset device according to the embodiment.
- the headset device in FIG. 3 includes: a video capture unit 10, an audio collection unit 11, an audio and video encoding unit 12, a central processing unit 13, an audio and video decoding unit 14, a video output unit 15, and an audio output unit 16.
- the video capture unit 10 is composed of two adjacent cameras.
- the two adjacent cameras simultaneously capture the same scene according to the distance between the two pupils of the human eye, and obtain left and right eye image data of different angles.
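- A rough sketch of reading the two adjacent cameras as a synchronized pair is shown below. The device indices and the grab-then-retrieve pattern are assumptions for illustration; real hardware synchronization would normally be done at the driver or sensor level.

```python
import cv2

left_cam = cv2.VideoCapture(0)   # assumed device index of the left camera
right_cam = cv2.VideoCapture(1)  # assumed device index of the right camera

def capture_stereo_pair():
    """Grab both sensors first, then retrieve, so the two frames are taken
    as close together in time as the driver allows."""
    if not (left_cam.grab() and right_cam.grab()):
        raise RuntimeError("camera grab failed")
    _, left_frame = left_cam.retrieve()
    _, right_frame = right_cam.retrieve()
    return left_frame, right_frame
```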
- the audio collection unit 11 includes at least one microphone for synchronously picking up audio data.
- the audio and video encoding unit 12 is configured to receive the left- and right-eye image data collected by the video capture unit 10 and the audio data synchronously picked up by the audio collection unit 11, encode the left- and right-eye image data and the audio data into audio-video data, and send it to the central processing unit 13.
- the central processing unit 13 is configured to receive the audio-video data sent by the audio and video encoding unit 12 and send it to the opposite end, and to receive the audio-video data sent by the opposite end and forward it to the audio and video decoding unit 14.
- the central processing unit 13 includes a network transmission module, and sends the audio and video data sent by the audio and video encoding unit 12 to the opposite end by using the network transmission module, and receives the encoded audio and video data sent by the opposite end by using the network transmission module.
- the audio and video decoding unit 14 is configured to decode the audio and video data sent by the opposite end into corresponding left and right eye image data and audio data, and send the decoded left and right eye image data to the video output unit 15 for 3D video. Displaying, and transmitting the decoded audio data to the audio output unit 16 for audio output.
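- The capture-encode-send path of units 10 through 13 can be pictured with the following sketch. The side-by-side frame packing, JPEG compression and length-prefixed TCP framing are illustrative assumptions; the patent does not prescribe a particular codec or transport.

```python
import socket
import struct

import cv2
import numpy as np

def encode_stereo_frame(left_frame, right_frame, quality=80):
    """Pack the left/right views side by side and compress them to bytes."""
    packed = np.hstack([left_frame, right_frame])
    ok, buf = cv2.imencode(".jpg", packed, [cv2.IMWRITE_JPEG_QUALITY, quality])
    if not ok:
        raise RuntimeError("video encoding failed")
    return buf.tobytes()

def send_av_packet(sock: socket.socket, video_bytes: bytes, audio_bytes: bytes):
    """Length-prefix the video and audio payloads and send them to the peer."""
    sock.sendall(struct.pack("!II", len(video_bytes), len(audio_bytes)))
    sock.sendall(video_bytes)
    sock.sendall(audio_bytes)
```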
- the headset device of this embodiment uses a video capture unit composed of two adjacent cameras: the image data collected by the two adjacent cameras and the audio data synchronously collected by the audio collection unit are transmitted to the opposite end over the network, so that the two parties can hold a real-time 3D video call and share with each other the scenes they are viewing.
- the video output unit 15 includes a left display screen for receiving the left-eye image and a right display screen for receiving the right-eye image; a left optical system is disposed on the side of the left display close to the human eye, and a right optical system is disposed on the side of the right display close to the human eye; the left and right optical systems are arranged so that, when a person's left and right eyes see the left and right displays through the corresponding optical systems, the images fuse into a stereoscopic image with a three-dimensional effect.
- the headset further includes an optical system control knob for adjusting the distance between the virtual screen formed by the left and right optical systems and the human eye, thereby controlling the display scale of the content viewed by the user. For example, when the virtual screen formed by the left and right optical systems is at the maximum distance from the human eye, the local video output unit displays the video content sent by the opposite user at equal scale; when the user wants to view part of the picture more clearly, the distance between the virtual screen formed by the left and right optical systems and the human eye is reduced, based on the principle that nearer objects appear larger.
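- The "equal scale at maximum distance, larger when pulled closer" behaviour follows from simple angular-size geometry. The sketch below assumes a flat virtual screen of fixed width and is only an illustration of the scale change the knob produces, not a formula given in the patent.

```python
import math

def apparent_scale(screen_width_m, distance_m, reference_distance_m):
    """Scale factor of the displayed content relative to the equal-scale setting
    at the maximum (reference) virtual-screen distance: halving the distance
    roughly doubles the angular size of the picture."""
    angle = 2 * math.atan(screen_width_m / (2 * distance_m))
    ref_angle = 2 * math.atan(screen_width_m / (2 * reference_distance_m))
    return angle / ref_angle

print(apparent_scale(2.0, distance_m=1.5, reference_distance_m=3.0))  # ~1.8x
```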
- the headset further includes an audio and video switching unit, configured to switch, according to a user instruction, the audio-video data that the central processing unit sends to the audio and video decoding unit to the audio-video data collected by the local end, or to restore it to the audio-video data sent by the opposite end. Adding the audio and video switching unit lets the user check in real time the content they themselves are capturing, i.e. watch the same content as the opposite user.
- the headset of this embodiment further includes a camera distance adjustment knob and a camera angle adjustment knob; the camera distance adjustment knob is used to fine-tune the horizontal distance between the two cameras to match the interpupillary distance of different users, avoiding ghosting and improving the viewing effect.
- the camera angle adjustment knob is used to adjust the angle of each camera to meet the user's needs for shooting at different angles of the scene.
- the audio collecting unit 11 includes two or more microphones.
- the central processing unit 13 is also used for stereo processing of audio data picked up by two or more microphones.
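- A minimal sketch of the stereo processing for a two-microphone audio collection unit, simply interleaving the two mono streams into left and right channels, might look like this. The int16 sample format and equal stream lengths are assumptions for illustration.

```python
import numpy as np

def to_stereo(left_mic: np.ndarray, right_mic: np.ndarray) -> np.ndarray:
    """Interleave two mono microphone buffers (int16 samples) into one
    stereo buffer, with the left microphone on the left channel."""
    n = min(len(left_mic), len(right_mic))
    stereo = np.empty((n, 2), dtype=np.int16)
    stereo[:, 0] = left_mic[:n]
    stereo[:, 1] = right_mic[:n]
    return stereo
```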
- Embodiment 2:
- This embodiment provides a three-dimensional video communication system based on the same technical concept as that of the first embodiment.
- FIG. 4 is a schematic structural diagram of a three-dimensional video call system according to the embodiment of the present invention, the three-dimensional video call system of FIG. 4 includes: a network server 40 and at least two headsets 41 provided in the first embodiment;
- Any two of the headsets 41 establish a connection through the network server 40 and implement a three-dimensional video call through the network server 40.
- the network server is used to establish a connection between the headsets, and the data transmission between the headsets is implemented through the network server, so that real-time 3D video calls between the headsets are realized, and the scenes are shared with each other.
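- The patent does not specify the server's protocol. As an assumption-laden sketch, a network server that pairs two headsets by a call identifier and relays their audio-video bytes could be as small as the asyncio program below; the port and the newline-delimited call id are invented for illustration.

```python
import asyncio

waiting = {}  # call_id -> (reader, writer) of the first headset that joined

async def forward(src_reader, dst_writer):
    """Copy raw audio-video bytes from one headset to the other."""
    while data := await src_reader.read(4096):
        dst_writer.write(data)
        await dst_writer.drain()

async def handle_headset(reader, writer):
    call_id = (await reader.readline()).decode().strip()
    if call_id not in waiting:
        waiting[call_id] = (reader, writer)   # first participant waits for a peer
        return
    peer_reader, peer_writer = waiting.pop(call_id)
    await asyncio.gather(forward(reader, peer_writer),    # relay in both directions
                         forward(peer_reader, writer))

async def main():
    server = await asyncio.start_server(handle_headset, "0.0.0.0", 9000)
    async with server:
        await server.serve_forever()

asyncio.run(main())
```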
- Embodiment 3:
- FIG. 5 is a flowchart of a method for implementing a three-dimensional video call according to the embodiment. As shown in FIG. 5, the method in the third embodiment includes:
- S510: Acquire left- and right-eye image data of different angles, obtained by two adjacent cameras on the headset synchronously capturing the same scene at a spacing corresponding to the distance between the two pupils of the human eye, and acquire audio data synchronously picked up by at least one microphone on the headset.
- the horizontal distance between the two cameras is fine-tuned to adapt to the pupil distance of different users.
- S520: Encode the left- and right-eye image data and the audio data into audio-video data.
- S530: Send the audio-video data to the opposite end, and receive the audio-video data sent by the opposite end.
- S540: Decode the audio-video data sent by the opposite end into corresponding left- and right-eye image data and audio data, send the decoded left- and right-eye image data to the video output unit for 3D video display, and send the decoded audio data to the audio output unit for audio output.
- the decoded left and right eye image data is sent to the video output unit for 3D video display, including:
- the left display receives the left eye image
- the right display receives the right eye image
- the images displayed on the left and right display screens are combined into a stereoscopic image having a three-dimensional effect by using a left optical system disposed on the side of the left display close to the human eye and a right optical system disposed on the side of the right display close to the human eye.
- the above solution further includes: adjusting a distance between the virtual screen formed by the left and right optical systems and the human eye, and controlling display content viewed by the human eye and display ratio of the display content.
- the audio and video data source is switched, according to a user instruction, to the audio-video data collected by the local end, or restored to the audio-video data sent by the opposite end.
- in summary, the headset device provided by the embodiments of the present invention includes at least two cameras arranged at intervals on the head fixing device and/or the video output unit, so that the video image of the scene in which the user is located can be captured from more directions and more completely.
- the three-dimensional video call system and the method for implementing a three-dimensional video call provided by the embodiments of the present invention simulate the way the human eyes observe a scene: two adjacent cameras form the video capture unit of the headset and capture left- and right-eye image data with binocular parallax, the audio collection unit of the headset synchronously picks up audio data, the left- and right-eye image data and the synchronously picked-up audio data are encoded and sent to the opposite headset for 3D display, and at the same time the audio-video data sent by the opposite end is received and decoded; the resulting left- and right-eye image data is sent to the local video output unit for 3D video display and the resulting audio data is sent to the local audio output unit for audio output, so that a real-time 3D video call between the two users is realized and each party can share with the other the scene they are viewing.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
Abstract
Embodiments of the present invention disclose a head-mounted device, a three-dimensional video call system and a method for implementing a three-dimensional video call, relating to virtual reality technology, and invented so as to capture a relatively complete video image of the scene in which the user is located, enable a real-time 3D video call between two users, and let each party share the scene they are viewing. The head-mounted device includes at least two cameras arranged at intervals from one another on the head fixing device and/or the video output unit, the viewing angles of the at least two cameras combining to cover all directions in the horizontal plane. In a preferred embodiment, the head-mounted device further includes a video capture unit composed of two adjacent cameras; the two adjacent cameras, spaced according to the distance between the two pupils of the human eye, synchronously capture the same scene to obtain left- and right-eye image data from different angles, which is used to synthesize a stereoscopic image with a three-dimensional effect.
Description
The present invention relates to virtual reality technology, and in particular to a head-mounted device, a three-dimensional video call system and a method for implementing a three-dimensional video call.
Background of the Invention
Virtual reality technology uses computer technology as its core, combined with opto-electronic sensing, to generate a realistic virtual environment within a defined scope (such as an aircraft cockpit, a molecular-structure world, or a hazardous environment) that integrates vision, hearing and touch. With dedicated equipment (motion capture with free spatial positioning, force-feedback input, a digital helmet, a stereoscopic display environment, and so on), the user can interact naturally and realistically with objects in the virtual world in real time, producing the sensation of being present at the scene.
In recent years, as virtual reality technology has gradually matured, various virtual reality devices have appeared one after another, such as three-dimensional scanners and head-mounted display devices. Carrying out social activities through virtual reality devices is one of their popular uses.
When engaging in social activities, some users want to show the environment they are in to the other party, for example letting friends see their newly decorated home or the beautiful scenery around them, but existing devices do not meet this need well. Some devices do have a camera, but usually only a single one, which captures only part of the scene; a remote user who wants to observe that scene can only follow wherever the camera is pointed and cannot freely choose a viewpoint as panoramic video allows.
In addition, as mobile broadband network speeds have increased, various video applications have emerged, among which long-distance communication devices are widely used. The basic principle of such a device is: data are collected through a camera and a microphone at one end, sent through a network transmission module, and the image is displayed and the sound played on the device at the other end. However, this type of long-distance communication device offers only a flat display and cannot faithfully present in 3D what the sending party sees.
Summary of the Invention
In view of this, embodiments of the present invention provide a head-mounted device, a three-dimensional video call system and a method for implementing a three-dimensional video call, which can capture a relatively complete video image of the scene in which the user is located, enable a real-time 3D video call between two users, and let each party share the scene they are viewing.
To achieve the above objects, the embodiments of the present invention adopt the following technical solutions:
In one aspect, an embodiment of the present invention provides a head-mounted device, including:
a head-mounted device including a video output unit and a head fixing device; the video output unit is mounted on the head fixing device, and the head-mounted device further includes at least two cameras arranged at intervals from one another on the head fixing device and/or the video output unit, the viewing angles of the at least two cameras combining to cover all directions in the horizontal plane.
Optionally, the at least two cameras are arranged on the same circumference.
Optionally, the at least two cameras are arranged equidistantly on the same circumference.
Optionally, when the head-mounted device is fixed on the user's head by the head fixing device, the at least two cameras lie in the same horizontal plane.
Optionally, the head fixing device includes: a first fixing part and a second fixing part;
the first fixing part and the second fixing part each include a fixed end, a free end, and an arc-shaped portion connecting the fixed end and the free end;
the first fixing part and the second fixing part enclose an annular space for fixing the user's head;
the at least two cameras are arranged on the first fixing part and/or the second fixing part, with their shooting directions facing the outside of the first fixing part and/or the second fixing part.
Optionally, the head-mounted device further includes a panoramic image stitching device connected to each camera.
Optionally, the head-mounted device further includes a video transmission device connected to the panoramic image stitching device.
Optionally, the head-mounted device further includes an environmental obstacle detection device connected to each camera.
Optionally, the head-mounted device further includes an environmental obstacle detection device connected to the panoramic image stitching device.
Preferably, the head-mounted device further includes a video capture unit composed of two adjacent cameras; the two adjacent cameras, spaced according to the distance between the two pupils of the human eye, synchronously capture the same scene to obtain left- and right-eye image data from different angles.
Further preferably, the head-mounted device further includes an audio collection unit, an audio and video encoding unit, a central processing unit, an audio and video decoding unit and an audio output unit;
the audio collection unit includes at least one microphone for synchronously picking up audio data;
the audio and video encoding unit is configured to receive the left- and right-eye image data collected by the video capture unit and the audio data synchronously picked up by the audio collection unit, encode the left- and right-eye image data and the audio data into audio-video data, and send it to the central processing unit;
the central processing unit is configured to receive the audio-video data sent by the audio and video encoding unit and send it to the opposite end, and to receive the audio-video data sent by the opposite end and send it to the audio and video decoding unit;
the audio and video decoding unit is configured to decode the audio-video data sent by the opposite end into corresponding left- and right-eye image data and audio data, send the decoded left- and right-eye image data to the video output unit for 3D video display, and send the decoded audio data to the audio output unit for audio output.
Preferably, the video output unit includes a left display screen for receiving the left-eye image and a right display screen for receiving the right-eye image; a left optical system is disposed on the side of the left display close to the human eye, and a right optical system is disposed on the side of the right display close to the human eye; the left and right optical systems are arranged so that, when a person's left and right eyes see the left and right displays through the corresponding optical systems, the images fuse into a stereoscopic image with a three-dimensional effect.
Further preferably, the head-mounted device further includes an optical system control knob for adjusting the distance between the virtual screen formed by the left and right optical systems and the human eye, thereby controlling the display scale of the content viewed by the user.
Preferably, the head-mounted device further includes a camera distance adjustment knob for fine-tuning the horizontal distance between the two cameras.
Preferably, the head-mounted device further includes a camera angle adjustment knob for adjusting the angle of each camera.
Preferably, the head-mounted device further includes an audio and video switching unit, configured to switch, according to a user instruction, the audio-video data that the central processing unit sends to the audio and video decoding unit to the audio-video data collected by the local end, or to restore it to the audio-video data sent by the opposite end.
Preferably, the audio collection unit includes two or more microphones, and the central processing unit is further configured to perform stereo processing on the audio data picked up by the two or more microphones.
In another aspect, the present invention provides a three-dimensional video call system, including: a network server and at least two head-mounted devices provided by the above technical solutions;
any two of the head-mounted devices establish a connection through the network server and carry out a three-dimensional video call through the network server.
In yet another aspect, the present invention provides a method for implementing a three-dimensional video call, which uses the head-mounted device provided by the above technical solutions and includes:
acquiring left- and right-eye image data of different angles obtained by two adjacent cameras on the head-mounted device synchronously capturing the same scene at a spacing corresponding to the distance between the two pupils of the human eye, and acquiring audio data synchronously picked up by at least one microphone on the head-mounted device;
encoding the left- and right-eye image data and the audio data into audio-video data;
sending the audio-video data to the opposite end, and receiving the audio-video data sent by the opposite end;
decoding the audio-video data sent by the opposite end into corresponding left- and right-eye image data and audio data, sending the decoded left- and right-eye image data to the video output unit for 3D video display, and sending the decoded audio data to the audio output unit for audio output.
Preferably, sending the decoded left- and right-eye image data to the video output unit for 3D video display includes:
receiving the left-eye image with the left display screen and the right-eye image with the right display screen;
using a left optical system disposed on the side of the left display close to the human eye and a right optical system disposed on the side of the right display close to the human eye, combining the images shown on the left and right displays into a stereoscopic image with a three-dimensional effect.
The beneficial effects of the embodiments of the present invention are as follows. On the one hand, because the head-mounted device provided by the embodiments of the present invention includes at least two cameras arranged at intervals on the head fixing device and/or the video output unit, the video image of the scene in which the user is located can be captured from more directions and more completely. On the other hand, the three-dimensional video call system and the method for implementing a three-dimensional video call provided by the embodiments of the present invention simulate the way the human eyes observe a scene: two adjacent cameras form the video capture unit of the head-mounted device and collect left- and right-eye image data with binocular parallax, the audio collection unit of the head-mounted device synchronously picks up audio data, the left- and right-eye image data and the synchronously picked-up audio data are encoded and sent to the opposite head-mounted device for 3D display, and at the same time the audio-video data sent by the opposite end is received and decoded; the resulting left- and right-eye image data is sent to the local video output unit for 3D video display and the resulting audio data is sent to the local audio output unit for audio output, so that a real-time 3D video call between the two users is realized and each party can share with the other the scene they are viewing.
Brief Description of the Drawings
To explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person of ordinary skill in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic structural view of a head-mounted device according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of the arrangement of the cameras in a head-mounted device according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of the head-mounted device provided by Embodiment 1 of the present invention;
FIG. 4 is a schematic structural diagram of the three-dimensional video call system provided by Embodiment 2 of the present invention;
FIG. 5 is a flowchart of the method for implementing a three-dimensional video call provided by Embodiment 3 of the present invention.
To make the objects, technical solutions and advantages of the present invention clearer, the embodiments of the present invention are described in further detail below with reference to the drawings.
It should be clear that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
Referring to FIG. 1, an embodiment of a head-mounted device of the present invention includes a video output unit 1 and a head fixing device 2; the video output unit 1 is mounted on the head fixing device 2, and the head-mounted device further includes at least two cameras 3 arranged at intervals from one another on the head fixing device 2 and/or the video output unit 1.
Because the head-mounted device provided by the embodiment of the present invention includes at least two cameras arranged at intervals on the head fixing device and/or the video output unit, the video image of the scene in which the user is located can be captured from more directions and more completely.
In the foregoing head-mounted device embodiment, the video output unit 1 may use the video output unit of an existing head-mounted device; for example, it may include the relevant display elements, optical elements and the like.
In the foregoing head-mounted device embodiment, the head fixing device 2 includes: a first fixing part 21 and a second fixing part 22; the first fixing part 21 and the second fixing part 22 each include a fixed end, a free end, and an arc-shaped portion connecting the fixed end and the free end; the first fixing part and the second fixing part enclose an annular space for fixing the user's head; the at least two cameras 3 are arranged on the first fixing part 21 and/or the second fixing part 22, with their shooting directions facing the outside of the first fixing part and/or the second fixing part.
In the foregoing head-mounted device embodiment, each camera 3 may be a camera of the prior art. The number of cameras can be determined by the shooting range of a single camera, that is, its angle of view: if the camera's field of view (FOV) is large, the spacing between cameras can be increased. If a panoramic video image of the scene in which the head-mounted device is located is required and the FOV of each camera reaches 180 degrees, only two cameras mounted 180 degrees apart are needed.
Preferably, the scenes captured by the arranged cameras can be combined to obtain a panoramic video image of the scene in which the head-mounted device is located.
Referring to FIG. 2, as a preferred embodiment, the number of cameras is four, and the shooting ranges of adjacent cameras just meet or partially overlap.
In the foregoing head-mounted device embodiment, the at least two cameras are arranged on the same circumference. Preferably, the at least two cameras are arranged equidistantly on the same circumference.
To obtain an immersive video image of the scene, the at least two cameras lie in the same horizontal plane when the head-mounted device is fixed on the user's head by the head fixing device.
To obtain a complete video image of the scene, the head-mounted device may further include a panoramic image stitching device connected to each camera. Image stitching is a technique that uses real-scene images to form a panoramic space: multiple images are stitched into one large-scale image or a 360-degree panorama. Image stitching involves computer vision, computer graphics, digital image processing, and certain mathematical tools. Its basic steps mainly include camera calibration, sensor image distortion correction, image projection transformation, matching point selection, panoramic image stitching (fusion), and brightness and color equalization. Image stitching is prior art and will not be elaborated further here.
To make it easy to share the panoramic video obtained by the panoramic image stitching device with other users, the head-mounted device may further include a video transmission device connected to the panoramic image stitching device. In this way the video can be conveniently transmitted to other users in different places for viewing.
To make it easy to detect whether there are obstacles in the surrounding environment and remind the user to avoid danger, the head-mounted device may further include an environmental obstacle detection device connected to each camera. Obstacle detection from images may use existing techniques.
As an optional implementation, the environmental obstacle detection device may also be connected to the panoramic image stitching device. In this way, the surrounding obstacles can be detected locally from images of the surrounding scene in the obtained real-time panoramic video of the user's environment.
Because there is a horizontal distance of roughly 6 cm between a person's two eyes, when observing an object in front, the two eyes necessarily see the scene from slightly different angles; this difference allows the brain to automatically distinguish up and down, left and right, and near and far, thereby producing stereoscopic vision. The present invention simulates the way the human eye observes a scene: two adjacent cameras in the above head-mounted device are selected to form a video capture unit, the two adjacent cameras collect image data with left-right parallax, and the image data is sent to the video output unit of the head-mounted device for 3D display.
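For a rectified stereo pair whose baseline matches the roughly 6 cm interpupillary distance mentioned above, the depth of a point follows from its pixel disparity as Z = f·B/d. The snippet below is a generic illustration of this relation; the focal length is an assumed calibration value, not one taken from the patent.

```python
def depth_from_disparity(disparity_px: float,
                         focal_length_px: float = 1000.0,  # assumed calibration value
                         baseline_m: float = 0.06) -> float:
    """Depth of a point from the disparity between the left and right views,
    using Z = f * B / d for a rectified stereo pair about 6 cm apart."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

print(depth_from_disparity(30))  # a 30-pixel disparity corresponds to about 2 m
```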
Embodiment 1:
FIG. 3 is a schematic structural diagram of the head-mounted device provided by this embodiment. The head-mounted device in FIG. 3 includes: a video capture unit 10, an audio collection unit 11, an audio and video encoding unit 12, a central processing unit 13, an audio and video decoding unit 14, a video output unit 15 and an audio output unit 16.
The video capture unit 10 is composed of two adjacent cameras; the two adjacent cameras, spaced according to the distance between the two pupils of the human eye, synchronously capture the same scene to obtain left- and right-eye image data from different angles.
The audio collection unit 11 includes at least one microphone for synchronously picking up audio data.
The audio and video encoding unit 12 is configured to receive the left- and right-eye image data collected by the video capture unit 10 and the audio data synchronously picked up by the audio collection unit 11, encode the left- and right-eye image data and the audio data into audio-video data, and send it to the central processing unit 13.
The central processing unit 13 is configured to receive the audio-video data sent by the audio and video encoding unit 12 and send it to the opposite end, and to receive the audio-video data sent by the opposite end and send it to the audio and video decoding unit 14. Specifically, the central processing unit 13 includes a network transmission module, which it uses to send the audio-video data from the audio and video encoding unit 12 to the opposite end and to receive the encoded audio-video data sent by the opposite end.
The audio and video decoding unit 14 is configured to decode the audio-video data sent by the opposite end into corresponding left- and right-eye image data and audio data, send the decoded left- and right-eye image data to the video output unit 15 for 3D video display, and send the decoded audio data to the audio output unit 16 for audio output.
The head-mounted device of this embodiment uses a video capture unit composed of two adjacent cameras: the image data collected by the two adjacent cameras and the audio data synchronously collected by the audio collection unit are transmitted to the opposite end over the network, so that the two parties can hold a real-time 3D video call and share with each other the scenes they are viewing.
In one specific implementation of this embodiment, the video output unit 15 includes a left display screen for receiving the left-eye image and a right display screen for receiving the right-eye image; a left optical system is disposed on the side of the left display close to the human eye, and a right optical system is disposed on the side of the right display close to the human eye; the left and right optical systems are arranged so that, when a person's left and right eyes see the left and right displays through the corresponding optical systems, the images fuse into a stereoscopic image with a three-dimensional effect.
In this specific implementation, the head-mounted device further includes an optical system control knob for adjusting the distance between the virtual screen formed by the left and right optical systems and the human eye, thereby controlling the display scale of the content viewed by the user. For example, when the virtual screen formed by the left and right optical systems is at the maximum distance from the human eye, the local video output unit is set to display the video content sent by the opposite user at equal scale; when the user wants to see an object in the picture more clearly, the distance between the virtual screen formed by the left and right optical systems and the human eye is reduced, based on the principle that nearer objects appear larger.
In another specific implementation of this embodiment, the head-mounted device further includes an audio and video switching unit, configured to switch, according to a user instruction, the audio-video data that the central processing unit sends to the audio and video decoding unit to the audio-video data collected by the local end, or to restore it to the audio-video data sent by the opposite end. Adding the audio and video switching unit lets the user check in real time the content they themselves are capturing, i.e. watch the same content as the opposite user.
It should be noted that, because of individual differences, the head-mounted device of this embodiment further includes a camera distance adjustment knob and a camera angle adjustment knob; the camera distance adjustment knob is used to fine-tune the horizontal distance between the two cameras to match the interpupillary distance of different users, avoiding ghosting and improving the viewing effect. The camera angle adjustment knob is used to adjust the angle of each camera to meet the user's needs for shooting the scene at different angles.
To further improve the viewing effect, in this embodiment the audio collection unit 11 preferably includes two or more microphones. When the audio collection unit 11 includes two or more microphones, the central processing unit 13 is further configured to perform stereo processing on the audio data picked up by the two or more microphones.
Embodiment 2:
Based on the same technical concept as Embodiment 1, this embodiment provides a three-dimensional video call system.
FIG. 4 is a schematic structural diagram of the three-dimensional video call system provided by this embodiment. The three-dimensional video call system in FIG. 4 includes: a network server 40 and at least two head-mounted devices 41 provided by Embodiment 1;
any two of the head-mounted devices 41 establish a connection through the network server 40 and carry out a three-dimensional video call through the network server 40.
In this embodiment the network server is used to establish connections between the head-mounted devices and to carry the data transmission between them, so that real-time 3D video calls between the head-mounted devices are realized and the parties share with each other the scenes they are viewing.
Embodiment 3:
Based on the same technical concept as Embodiment 1, and using the head-mounted device provided by Embodiment 1, this embodiment provides a method for implementing a three-dimensional video call. FIG. 5 is a flowchart of the method provided by this embodiment; as shown in FIG. 5, the method of Embodiment 3 includes:
S510: Acquire left- and right-eye image data of different angles, obtained by two adjacent cameras on the head-mounted device synchronously capturing the same scene at a spacing corresponding to the distance between the two pupils of the human eye, and acquire audio data synchronously picked up by at least one microphone on the head-mounted device.
In practice, before the two adjacent cameras synchronously capture the same scene at the pupil distance, the horizontal distance between the two cameras is fine-tuned so that it matches the interpupillary distance of the particular user.
S520: Encode the left- and right-eye image data and the audio data into audio-video data.
S530: Send the audio-video data to the opposite end, and receive the audio-video data sent by the opposite end.
S540: Decode the audio-video data sent by the opposite end into corresponding left- and right-eye image data and audio data, send the decoded left- and right-eye image data to the video output unit for 3D video display, and send the decoded audio data to the audio output unit for audio output.
In this step, sending the decoded left- and right-eye image data to the video output unit for 3D video display includes:
receiving the left-eye image with the left display screen and the right-eye image with the right display screen;
using the left optical system disposed on the side of the left display close to the human eye and the right optical system disposed on the side of the right display close to the human eye, combining the images shown on the left and right displays into a stereoscopic image with a three-dimensional effect.
Further, the above solution also includes: adjusting the distance between the virtual screen formed by the left and right optical systems and the human eye to control the displayed content viewed by the user and its display scale; and switching, according to a user instruction, the audio-video data source to the audio-video data collected by the local end, or restoring it to the audio-video data sent by the opposite end.
For the specific manner of performing each step in the method embodiment of the present invention, reference may be made to the head-mounted device embodiments of the present invention, and details are not repeated here.
In summary, on the one hand, because the head-mounted device provided by the embodiments of the present invention includes at least two cameras arranged at intervals on the head fixing device and/or the video output unit, the video image of the scene in which the user is located can be captured from more directions and more completely. On the other hand, the three-dimensional video call system and the method for implementing a three-dimensional video call provided by the embodiments of the present invention simulate the way the human eyes observe a scene: two adjacent cameras form the video capture unit of the head-mounted device and collect left- and right-eye image data with binocular parallax, the audio collection unit of the head-mounted device synchronously picks up audio data, the left- and right-eye image data and the synchronously picked-up audio data are encoded and sent to the opposite head-mounted device for 3D display, and at the same time the audio-video data sent by the opposite end is received and decoded; the resulting left- and right-eye image data is sent to the local video output unit for 3D video display and the resulting audio data is sent to the local audio output unit for audio output, so that a real-time 3D video call between the two users is realized and each party can share with the other the scene they are viewing.
The above are only specific implementations of the present invention, but the scope of protection of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed by the present invention shall be covered by the scope of protection of the present invention. Therefore, the scope of protection of the present invention shall be subject to the scope of protection of the claims.
Claims (20)
- A head-mounted device, including a video output unit and a head fixing device, the video output unit being mounted on the head fixing device, characterized in that the head-mounted device further includes at least two cameras arranged at intervals from one another on the head fixing device and/or the video output unit, the viewing angles of the at least two cameras combining to cover all directions in the horizontal plane.
- The head-mounted device according to claim 1, characterized in that the at least two cameras are arranged on the same circumference.
- The head-mounted device according to claim 2, characterized in that the at least two cameras are arranged equidistantly on the same circumference.
- The head-mounted device according to claim 3, characterized in that, when the head-mounted device is fixed on the user's head by the head fixing device, the at least two cameras lie in the same horizontal plane.
- The head-mounted device according to claim 1, characterized in that the head fixing device includes: a first fixing part and a second fixing part; the first fixing part and the second fixing part each include a fixed end, a free end, and an arc-shaped portion connecting the fixed end and the free end; the first fixing part and the second fixing part enclose an annular space for fixing the user's head; and the at least two cameras are arranged on the first fixing part and/or the second fixing part with their shooting directions facing the outside of the first fixing part and/or the second fixing part.
- The head-mounted device according to claim 1, characterized in that the head-mounted device further includes a panoramic image stitching device connected to each camera.
- The head-mounted device according to claim 6, characterized in that it further includes a video transmission device connected to the panoramic image stitching device.
- The head-mounted device according to claim 1, characterized in that the head-mounted device further includes an environmental obstacle detection device connected to each camera.
- The head-mounted device according to claim 6, characterized in that it further includes an environmental obstacle detection device connected to the panoramic image stitching device.
- The head-mounted device according to claim 1, characterized in that the head-mounted device further includes a video capture unit composed of two adjacent cameras; the two adjacent cameras, spaced according to the distance between the two pupils of the human eye, synchronously capture the same scene to obtain left- and right-eye image data from different angles.
- The head-mounted device according to claim 10, characterized in that the head-mounted device further includes an audio collection unit, an audio and video encoding unit, a central processing unit, an audio and video decoding unit and an audio output unit; the audio collection unit includes at least one microphone for synchronously picking up audio data; the audio and video encoding unit is configured to receive the left- and right-eye image data collected by the video capture unit and the audio data synchronously picked up by the audio collection unit, encode the left- and right-eye image data and the audio data into audio-video data, and send it to the central processing unit; the central processing unit is configured to receive the audio-video data sent by the audio and video encoding unit and send it to the opposite end, and to receive the audio-video data sent by the opposite end and send it to the audio and video decoding unit; and the audio and video decoding unit is configured to decode the audio-video data sent by the opposite end into corresponding left- and right-eye image data and audio data, send the decoded left- and right-eye image data to the video output unit for 3D video display, and send the decoded audio data to the audio output unit for audio output.
- The head-mounted device according to claim 11, characterized in that the video output unit includes a left display screen for receiving the left-eye image and a right display screen for receiving the right-eye image; a left optical system is disposed on the side of the left display close to the human eye and a right optical system is disposed on the side of the right display close to the human eye; and the left and right optical systems are arranged so that, when a person's left and right eyes see the left and right displays through the corresponding optical systems, the images fuse into a stereoscopic image with a three-dimensional effect.
- The head-mounted device according to claim 12, characterized in that the head-mounted device further includes an optical system control knob for adjusting the distance between the virtual screen formed by the left and right optical systems and the human eye, thereby controlling the display scale of the content viewed by the user.
- The head-mounted device according to claim 11, characterized in that the head-mounted device further includes a camera distance adjustment knob for fine-tuning the horizontal distance between the two adjacent cameras.
- The head-mounted device according to claim 11, characterized in that the head-mounted device further includes a camera angle adjustment knob for adjusting the angle of each of the two adjacent cameras.
- The head-mounted device according to claim 11, characterized in that the head-mounted device further includes an audio and video switching unit, configured to switch, according to a user instruction, the audio-video data that the central processing unit sends to the audio and video decoding unit to the audio-video data collected by the local end, or to restore it to the audio-video data sent by the opposite end.
- The head-mounted device according to claim 11, characterized in that the audio collection unit includes two or more microphones; and the central processing unit is further configured to perform stereo processing on the audio data picked up by the two or more microphones.
- A three-dimensional video call system, characterized by including: a network server and at least two head-mounted devices according to claim 11; any two of the head-mounted devices establish a connection through the network server and carry out a three-dimensional video call through the network server.
- A method for implementing a three-dimensional video call using the head-mounted device according to claim 11, including: acquiring left- and right-eye image data of different angles obtained by two adjacent cameras on the head-mounted device synchronously capturing the same scene at a spacing corresponding to the distance between the two pupils of the human eye, and acquiring audio data synchronously picked up by at least one microphone on the head-mounted device; encoding the left- and right-eye image data and the audio data into audio-video data; sending the audio-video data to the opposite end and receiving the audio-video data sent by the opposite end; decoding the audio-video data sent by the opposite end into corresponding left- and right-eye image data and audio data, sending the decoded left- and right-eye image data to the video output unit for 3D video display, and sending the decoded audio data to the audio output unit for audio output.
- The method according to claim 19, characterized in that sending the decoded left- and right-eye image data to the video output unit for 3D video display includes: receiving the left-eye image with the left display screen and the right-eye image with the right display screen; and, using a left optical system disposed on the side of the left display close to the human eye and a right optical system disposed on the side of the right display close to the human eye, combining the images shown on the left and right displays into a stereoscopic image with a three-dimensional effect.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/115,834 US9979930B2 (en) | 2015-12-03 | 2016-07-18 | Head-wearable apparatus, 3D video call system and method for implementing 3D video call |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201520996561.1U CN205318020U (zh) | 2015-12-03 | 2015-12-03 | 一种头戴显示设备 |
CN201520996561.1 | 2015-12-03 | ||
CN201510907657.0 | 2015-12-09 | ||
CN201510907657.0A CN105516639A (zh) | 2015-12-09 | 2015-12-09 | 头戴设备、三维视频通话系统和三维视频通话实现方法 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017092369A1 true WO2017092369A1 (zh) | 2017-06-08 |
Family
ID=58796171
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2016/090294 WO2017092369A1 (zh) | 2015-12-03 | 2016-07-18 | 一种头戴设备、三维视频通话系统和三维视频通话实现方法 |
Country Status (2)
Country | Link |
---|---|
US (1) | US9979930B2 (zh) |
WO (1) | WO2017092369A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10182210B1 (en) * | 2016-12-15 | 2019-01-15 | Steelcase Inc. | Systems and methods for implementing augmented reality and/or virtual reality |
US10701342B2 (en) * | 2018-02-17 | 2020-06-30 | Varjo Technologies Oy | Imaging system and method for producing images using cameras and processor |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103731659A (zh) * | 2014-01-08 | 2014-04-16 | 百度在线网络技术(北京)有限公司 | 头戴式显示设备 |
CN104144335A (zh) * | 2014-07-09 | 2014-11-12 | 青岛歌尔声学科技有限公司 | 一种头戴式可视设备和视频系统 |
US20150062293A1 (en) * | 2013-09-02 | 2015-03-05 | Lg Electronics Inc. | Digital device and method of controlling therefor |
CN104898276A (zh) * | 2014-12-26 | 2015-09-09 | 成都理想境界科技有限公司 | 头戴式显示装置 |
CN105516639A (zh) * | 2015-12-09 | 2016-04-20 | 北京小鸟看看科技有限公司 | 头戴设备、三维视频通话系统和三维视频通话实现方法 |
CN205318020U (zh) * | 2015-12-03 | 2016-06-15 | 北京小鸟看看科技有限公司 | 一种头戴显示设备 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
RU2322771C2 (ru) * | 2005-04-25 | 2008-04-20 | Святослав Иванович АРСЕНИЧ | Стереопроекционная система |
US8817092B2 (en) * | 2008-11-25 | 2014-08-26 | Stuart Leslie Wilkinson | Method and apparatus for generating and viewing combined images |
US8259159B2 (en) * | 2009-01-12 | 2012-09-04 | Hu Chao | Integrative spectacle-shaped stereoscopic video multimedia device |
KR101011636B1 (ko) * | 2009-12-30 | 2011-01-31 | (주)파워텍일렉트로닉스 | 돔형 감시카메라 |
TWI628947B (zh) * | 2009-12-31 | 2018-07-01 | 江國慶 | 以遠端伺服器傳輸通訊電話簿所需之影像之方法 |
US8964004B2 (en) * | 2010-06-18 | 2015-02-24 | Amchael Visual Technology Corporation | Three channel reflector imaging system |
JPWO2013099290A1 (ja) * | 2011-12-28 | 2015-04-30 | パナソニック株式会社 | 映像再生装置、映像再生方法、映像再生プログラム、映像送信装置、映像送信方法及び映像送信プログラム |
US20130250040A1 (en) * | 2012-03-23 | 2013-09-26 | Broadcom Corporation | Capturing and Displaying Stereoscopic Panoramic Images |
US8757900B2 (en) * | 2012-08-28 | 2014-06-24 | Chapman/Leonard Studio Equipment, Inc. | Body-mounted camera crane |
CZ308335B6 (cs) * | 2012-08-29 | 2020-05-27 | Awe Spol. S R.O. | Způsob popisu bodů předmětů předmětového prostoru a zapojení k jeho provádění |
CN103888163A (zh) * | 2012-12-22 | 2014-06-25 | 华为技术有限公司 | 一种眼镜式通信装置、系统及方法 |
US9649558B2 (en) * | 2014-03-14 | 2017-05-16 | Sony Interactive Entertainment Inc. | Gaming device with rotatably placed cameras |
US20150362733A1 (en) * | 2014-06-13 | 2015-12-17 | Zambala Lllp | Wearable head-mounted display and camera system with multiple modes |
- 2016-07-18: WO — PCT/CN2016/090294, patent WO2017092369A1, Application Filing (active)
- 2016-07-18: US — US 15/115,834, patent US9979930B2, status Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150062293A1 (en) * | 2013-09-02 | 2015-03-05 | Lg Electronics Inc. | Digital device and method of controlling therefor |
CN103731659A (zh) * | 2014-01-08 | 2014-04-16 | 百度在线网络技术(北京)有限公司 | 头戴式显示设备 |
CN104144335A (zh) * | 2014-07-09 | 2014-11-12 | 青岛歌尔声学科技有限公司 | 一种头戴式可视设备和视频系统 |
CN104898276A (zh) * | 2014-12-26 | 2015-09-09 | 成都理想境界科技有限公司 | 头戴式显示装置 |
CN205318020U (zh) * | 2015-12-03 | 2016-06-15 | 北京小鸟看看科技有限公司 | 一种头戴显示设备 |
CN105516639A (zh) * | 2015-12-09 | 2016-04-20 | 北京小鸟看看科技有限公司 | 头戴设备、三维视频通话系统和三维视频通话实现方法 |
Also Published As
Publication number | Publication date |
---|---|
US9979930B2 (en) | 2018-05-22 |
US20170163932A1 (en) | 2017-06-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109952759B (zh) | 用于具有hmd的视频会议的改进的方法和系统 | |
US20150358539A1 (en) | Mobile Virtual Reality Camera, Method, And System | |
US11006072B2 (en) | Window system based on video communication | |
CN104902263A (zh) | 一种图像信息展现系统和方法 | |
TWI692976B (zh) | 視頻通信裝置及方法 | |
CN204681518U (zh) | 一种全景图像信息采集设备 | |
US10645340B2 (en) | Video communication device and method for video communication | |
US10972699B2 (en) | Video communication device and method for video communication | |
EP3465631B1 (en) | Capturing and rendering information involving a virtual environment | |
WO2017092369A1 (zh) | 一种头戴设备、三维视频通话系统和三维视频通话实现方法 | |
JP2005142765A (ja) | 撮像装置及び方法 | |
US10701313B2 (en) | Video communication device and method for video communication | |
KR100703713B1 (ko) | 3차원 영상 획득 및 디스플레이가 가능한 3차원 모바일 장치 | |
JP2016072844A (ja) | 映像システム | |
WO2019097639A1 (ja) | 情報処理装置および画像生成方法 | |
JP2005064681A (ja) | 撮像・表示装置、撮像・表示システム、映像生成方法、この方法のプログラム、およびこのプログラムを記録した記録媒体 | |
JPWO2019038885A1 (ja) | 情報処理装置および画像出力方法 | |
CN117121473A (zh) | 影像显示系统、信息处理装置、信息处理方法及程序 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase | Ref document number: 15115834; Country of ref document: US |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16869680; Country of ref document: EP; Kind code of ref document: A1 |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 16869680; Country of ref document: EP; Kind code of ref document: A1 |