
WO2018113759A1 - Detection system and detection method based on positioning system and AR/MR - Google Patents

Detection system and detection method based on positioning system and AR/MR (基于定位系统和AR/MR的探测系统及探测方法) Download PDF

Info

Publication number
WO2018113759A1
WO2018113759A1 (PCT application PCT/CN2017/117880)
Authority
WO
WIPO (PCT)
Prior art keywords
display device
preset
coordinates
positioning
virtual image
Prior art date
Application number
PCT/CN2017/117880
Other languages
English (en)
French (fr)
Inventor
李凯
潘杰
郑浩
Original Assignee
大辅科技(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大辅科技(北京)有限公司
Publication of WO2018113759A1 publication Critical patent/WO2018113759A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V9/00Prospecting or detecting by methods not provided for in groups G01V1/00 - G01V8/00
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01VGEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V8/00Prospecting or detecting by optical means
    • G01V8/10Detecting, e.g. by using light barriers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering

Definitions

  • the present invention relates to the field of detection, and more particularly to a detection system and a detection method based on a positioning system and an AR/MR technique.
  • Detection is a very traditional technology; surveying, prospecting, and pipeline inspection are all forms of detection. Depending on the specific detection purpose, conventional methods include determining the position of an invisible object by receiving reflected waves using techniques such as sonar, radar, and infrared. It takes the average person a long time to master these conventional methods, and the equipment is relatively cumbersome and complex, so a simpler detection system that matches the habits of modern users is needed.
  • a positioning system and an AR/MR based detection system comprising:
  • AR/MR display device for displaying AR/MR images
  • a geolocation system for determining a geographic location of the AR/MR display device
  • An infrared laser positioning system for determining a 3D coordinate and a posture of the AR/MR display device in the determined area
  • the AR/MR detection system includes a database and a processing unit; the database stores data of the object to be tested in the area to be tested and a virtual image of the object to be tested, and the processing unit superimposes the virtual image of the object to be tested on the real image displayed by the AR/MR display device based on the geographic location, 3D coordinates, and posture of the AR/MR display device.
  • the geolocation system comprises one or more of a GPS, GSM, or LTE positioning system.
  • the geolocation system comprises a terrestrial base station and/or a terrestrial signal enhancement point.
  • a portion of the AR/MR detection system can be located in the cloud.
  • the AR/MR display device includes a head mounted display device, a smart phone, or a tablet computer.
  • the system further includes a detection device.
  • the detection device includes, but is not limited to, a sensor.
  • the detection device includes a capture device for collecting scene information.
  • the capture device includes, but is not limited to, a depth camera.
  • the AR/MR display device can receive interactive information of the user.
  • the present invention provides a positioning system and an AR/MR detecting method, including:
  • the preset virtual image is displayed in the AR/MR display device based on the positioning of the geolocation system and the infrared laser positioning system.
  • the virtual image has preset positioning coordinates.
  • the preset positioning coordinates of the virtual image include geographic coordinates.
  • the preset positioning coordinates of the virtual image include relative position coordinates.
  • the relative position coordinates include relative geographic coordinates and/or relative 3D coordinates.
  • the infrared laser positioning system is used to determine 3D coordinates and attitude of the AR/MR display device within the field region.
  • the AR/MR display device can be positioned using a relative position within a certain area, and the display coordinates of the virtual image can also be preset with the relative position.
  • the preset virtual image is displayed at the relative position.
  • a feature point in the real image is confirmed by image recognition, and the virtual image is displayed at a preset position relative to the feature point.
  • the geographic location of the AR/MR display device is determined according to the geolocation system, and when the AR/MR display device is in the preset geographic coordinate range, the virtual image is displayed at the preset relative position.
  • a non-transitory computer readable medium storing program instructions, which when executed by a processing device causes the apparatus to:
  • the preset virtual image is displayed in the AR/MR display device based on the positioning of the geolocation system and the infrared laser positioning system.
  • the virtual image has preset positioning coordinates.
  • the preset positioning coordinates of the virtual image include geographic coordinates.
  • the preset positioning coordinates of the virtual image include relative position coordinates.
  • the relative position coordinates include relative geographic coordinates and/or relative 3D coordinates.
  • the infrared laser positioning system is used to determine 3D coordinates and attitude of the AR/MR display device within the field region.
  • the AR/MR display device can be positioned using a relative position within a certain area, and the display coordinates of the virtual image can also be preset with the relative position.
  • the preset virtual image is displayed at the relative position.
  • a feature point in the real image is confirmed by image recognition, and the virtual image is displayed at a preset position relative to the feature point.
  • the geographic location of the AR/MR display device is determined according to the geolocation system, and when the AR/MR display device is in the preset geographic coordinate range, the virtual image is displayed at the preset relative position.
  • Figure 1 shows an example of a detection system based on a positioning system and an AR/MR
  • Figure 2 shows an example of a global positioning system used in the present invention
  • Figure 3 illustrates an embodiment of an AR/MR display device
  • Figure 4 illustrates an embodiment of a processing unit associated with an AR/MR display device
  • Figure 5 illustrates an embodiment of a computer system implementing a detection system of the present invention
  • Figure 6 is a flow chart showing the cooperation of the AR/MR detection system, the infrared laser positioning/scanning system, and the display device.
  • GPS or mobile communication signals are used as a global positioning system.
  • the signal can be enhanced as follows: a plurality of locators are set near the position to be detected and periodically transmit positioning signals to their surroundings, and the coverage of a locator's positioning signal serves as that locator's positioning region.
  • the locator periodically transmits a spherical low frequency electromagnetic field to the surroundings (the coverage radius is determined by the corresponding environment and the transmission power).
  • the positioning tag is mainly used to locate the tracked object; its function is to receive the low frequency magnetic field signal emitted by a locator and to resolve that locator's ID number from the signal.
  • one or more positioning communication base stations provide wireless signal coverage for the positioning regions of all the locators, and send the base station's ID number, the received locator ID number, the positioning tag's ID number, and the positioning time (the base station records the time at which it receives the positioning tag's transmission as the positioning time) to the positioning engine server.
  • the positioning engine server is connected to the positioning communication base stations via Ethernet, receives the base station ID number, the locator ID number, the positioning tag ID number, and the positioning time, and after processing obtains the movement trajectory of the positioning tag (that is, the movement trajectory of the tag carrier).
  • a system for implementing a mixed reality environment in the present invention can include a mobile display device in communication with a hub computing system.
  • the mobile display device can include a mobile processing unit coupled to a head mounted display device (or other suitable device).
  • the head mounted display device can include a display element.
  • the display element is transparent to a degree such that a user can see a real world object within the user's field of view (FOV) through the display element.
  • the display element also provides the ability to project a virtual image into the user's FOV such that the virtual image can also appear next to a real world object.
  • the system automatically tracks where the user is looking so that the system can determine where to insert the virtual image into the user's FOV. Once the system knows where to project the virtual image, the display element is used to project the image.
  • the hub computing system and one or more processing units may cooperate to construct a model of an environment including x, y, z Cartesian locations for all users in a room or other environment, real world objects, and virtual three dimensional objects.
  • the location of each head mounted display device worn by a user in the environment can be calibrated to the model of the environment and calibrated to each other. This allows the system to determine the line of sight of each user and the FOV of the environment.
  • a virtual image can be displayed to each user, but the system determines the display of the virtual image from the perspective of each user, thereby adjusting the virtual image for any parallax and occlusion from or due to other objects in the environment.
  • the model of the environment (referred to herein as a scene graph) and the tracking of the user's FOV and objects in the environment may be generated by a hub or mobile processing unit that works in concert or independently.
  • interaction encompasses both physical and linguistic interactions of a user with a virtual object.
  • a user simply looking at a virtual object is another example of a user's physical interaction with a virtual object.
  • the head mounted display device 2 can include an integrated processing unit 4.
  • the processing unit 4 can be separate from the head mounted display device 2 and can communicate with the head mounted display device 2 via wired or wireless communication.
  • the eyeglass-shaped head mounted display device 2 is worn on the user's head so that the user can view through the display and thus have an actual direct view of the space in front of the user.
  • actual direct view is used to refer to the ability to see a real world object directly with the human eye, rather than seeing the created image representation of the object. For example, viewing a room through glasses allows the user to get an actual direct view of the room, while watching a video on a television is not an actual direct view of the room. More details of the head mounted display device 2 are provided below.
  • the processing unit 4 may include many of the computing powers for operating the head mounted display device 2.
  • processing unit 4 communicates wirelessly (eg, WiFi, Bluetooth, infrared, or other wireless communication means) with one or more hub computing systems 12.
  • the hub computing system 12 can be provided remotely from the processing unit 4 such that the hub computing system 12 and the processing unit 4 communicate via a wireless network, such as a LAN or WAN.
  • hub computing system 12 may be omitted to provide a mobile mixed reality experience using head mounted display device 2 and processing unit 4.
  • the hub computing system 12 can be a computer, gaming system or console, and the like.
  • hub computing system 12 may include hardware components and/or software components such that hub computing system 12 may be used to execute applications such as gaming applications, non-gaming applications, and the like.
  • the hub computing system can include processors such as standardized processors, special purpose processors, microprocessors, and the like, which can execute instructions stored on a processor readable storage device to perform the processes described herein.
  • the hub computing system 12 further includes a capture device for capturing image data from portions of the scene within its FOV.
  • a scene is an environment in which a user moves around, this environment being captured within the FOV of the capture device and/or within the FOV of each head mounted display device 2.
  • the capture device 20 can include one or more cameras that visually monitor the user 18 and the surrounding space such that the gestures and/or movements performed by the user, as well as the structure of the surrounding space, can be captured, analyzed, and tracked to perform one or more controls or actions within an application and/or animate an avatar or on-screen character.
  • the hub computing system 12 can be connected to an audiovisual device 16 such as a television, monitor, high definition television (HDTV), etc. that can provide gaming or application vision.
  • the audiovisual device 16 includes a built-in speaker.
  • the audiovisual device 16 and the hub computing system 12 can be connected to the external speaker 22.
  • FIG. 1 illustrates an example of a plant 23 or a user's hand 23 as a real world object appearing within a user's FOV.
  • Control circuitry 136 provides various electronic devices that support the other components of head mounted display device 2. More details of control circuit 136 are provided below with reference to FIG. 4. Inside the temple 102, or mounted to the temple 102, are earphones 130, an inertial measurement unit 132, and a temperature sensor 138.
  • inertial measurement unit 132 (or IMU 132) includes inertial sensors, such as a three-axis magnetometer 132A, a three-axis gyroscope 132B, and a three-axis accelerometer 132C.
  • the inertial measurement unit 132 senses the position, orientation, and sudden acceleration (pitch, roll, and yaw) of the head mounted display device 2.
  • IMU 132 may also include other inertial sensors.
  • Microdisplay 120 projects an image through lens 122.
  • There are different image generation techniques that can be used to implement the microdisplay 120.
  • the microdisplay 120 can be implemented using a transmissive projection technique in which the light source is modulated by an optically active material and backlit with white light. These techniques are typically implemented using LCD type displays with powerful backlighting and high optical energy density.
  • Microdisplay 120 can also be implemented using a reflective technique in which external light is reflected and modulated by an optically active material. Depending on the technology, the illumination is provided from the front by a white light source or an RGB source.
  • microdisplay 120 can also be implemented using an emissive technique in which light is generated by the display itself.
  • For example, the PicoP (TM) display engine from Microvision, Inc. uses a miniature steering mirror to scan a laser signal onto a small screen that acts as a transmissive element, or to emit a beam of light (e.g., a laser) directly to the eye.
  • FIG. 3 is a block diagram depicting various components of the head mounted display device 2.
  • FIG. 4 is a block diagram depicting various components of processing unit 4.
  • a head mounted display device 2, the components of which are depicted in FIG. 3, is used to provide a mixed reality experience to a user by seamlessly blending one or more virtual images with the user's view of the real world. Additionally, the head mounted display device assembly includes a number of sensors that track various conditions.
  • the head mounted display device 2 will receive an instruction for the virtual image from the processing unit 4 and will provide the sensor information back to the processing unit 4.
  • Processing unit 4, the components of which are depicted in FIG. 4, will receive sensory information from head mounted display device 2 and will exchange information and data with hub computing device 12. Based on this exchange of information and data, processing unit 4 will determine where and when to provide a virtual image to the user and send instructions to the head mounted display device accordingly.
  • In one embodiment, all components of control circuit 200 are in communication with one another via dedicated lines or one or more buses. In another embodiment, each component of control circuit 200 is in communication with processor 210.
  • Camera interface 216 provides an interface to two room-facing cameras 112 and stores images received from cameras facing the room in camera buffer 218.
  • Display driver 220 will drive microdisplay 120.
  • the display formatter 222 provides information about the virtual image being displayed on the microdisplay 120 to the opacity control circuit 224 that controls the opacity filter 114.
  • Timing generator 226 is used to provide timing data to the system.
  • Display output interface 228 is a buffer for providing images from the room-facing cameras 112 to processing unit 4.
  • the display input interface 230 is a buffer for receiving an image such as a virtual image to be displayed on the microdisplay 120.
  • Display output interface 228 and display input interface 230 are in communication with band interface 232, which is an interface to processing unit 4.
  • the power management circuit 202 includes a voltage regulator 234, an eye tracking illumination driver 236, an audio DAC and amplifier 238, a microphone preamplifier and audio ADC 240, a temperature sensor interface 242, and a clock generator 244.
  • the voltage regulator 234 receives power from the processing unit 4 via the band interface 232 and provides this power to the other components of the head mounted display device 2.
  • Each eye tracking illumination driver 236 provides an IR source for the eye tracking illumination 134A as described above.
  • the audio DAC and amplifier 238 output audio information to the headphones 130.
  • the mic preamplifier and audio ADC 240 provide an interface for the microphone 110.
  • Temperature sensor interface 242 is an interface for temperature sensor 138.
  • the power management circuit 202 also provides power to and receives data from the three-axis magnetometer 132A, the three-axis gyroscope 132B, and the three-axis accelerometer 132C.
  • FIG. 4 is a block diagram depicting various components of processing unit 4.
  • FIG. 4 shows control circuit 304 in communication with power management circuit 306.
  • the control circuit 304 includes a central processing unit (CPU) 320, a graphics processing unit (GPU) 322, a cache 324, RAM 326, a memory controller 328 in communication with memory 330 (e.g., D-RAM), a flash memory controller 332 in communication with flash memory 334 (or other types of non-volatile storage), a display output buffer 336 in communication with head mounted display device 2 via band interface 302 and band interface 232, a display input buffer 338 in communication with head mounted display device 2 via band interface 302 and band interface 232, a microphone interface 340 in communication with an external microphone connector 342 for connecting to a microphone, a PCI Express interface for connecting to the wireless communication device 346, and one or more USB ports 348.
  • wireless communication device 346 can include a Wi-Fi enabled communication device, a Bluetooth communication device, an infrared communication device, and the like.
  • a USB port can be used to interface processing unit 4 to hub computing system 12 to load data or software onto processing unit 4 and to charge processing unit 4.
  • CPU 320 and GPU 322 are the main workhorses used to determine where, when, and how to insert a virtual three-dimensional object into the user's field of view. More details are provided below.
  • the power management circuit 306 includes a clock generator 360, an analog to digital converter 362, a battery charger 364, a voltage regulator 366, a head mounted display power supply 376, and a temperature sensor interface 372 in communication with the temperature sensor 374 (which may be located on the wristband of processing unit 4).
  • Analog to digital converter 362 is used to monitor battery voltage, temperature sensors, and control battery charging functions.
  • Voltage regulator 366 is in communication with battery 368 for providing electrical energy to the system.
  • Battery charger 364 is used to charge battery 368 upon receipt of electrical energy from charging jack 370 (via voltage regulator 366).
  • the HMD power supply 376 provides power to the head mounted display device 2.
  • Camera component 423 can include an infrared (IR) light component 425, a three-dimensional (3D) camera 426, and an RGB (visual image) camera 428 that can be used to capture depth images of a scene.
  • the IR light component 425 of the capture device 20 can emit infrared light onto the scene, and then a sensor (including a sensor not shown in some embodiments) can be used, for example using a 3-D camera 426 and/or RGB camera 428 to detect backscattered light from the surface of one or more targets and objects in the scene.
  • capture device 20 may further include a processor 432 that is communicable with image camera component 423.
  • processor 432 can include a standard processor, a special purpose processor, a microprocessor, and the like, capable of executing instructions that include, for example, instructions for receiving a depth image, generating a suitable data format (e.g., a frame), and transmitting the data to hub computing system 12.
  • Capture device 20 may further include a memory 434 that may store instructions executed by processor 432, images or image frames captured by a 3-D camera and/or RGB camera, or any other suitable information, images, and the like.
  • memory 434 may include random access memory (RAM), read only memory (ROM), cache, flash memory, a hard disk, or any other suitable storage component.
  • memory 434 can be a separate component in communication with image camera component 423 and processor 432.
  • memory 434 can be integrated into processor 432 and/or image capture component 423.
  • Capture device 20 is in communication with hub computing system 12 via communication link 436.
  • Communication link 436 may be a wired connection including, for example, a USB connection, a FireWire connection, an Ethernet cable connection, and/or a wireless connection such as a wireless 802.11b, 802.11g, 802.11a, or 802.11n connection.
  • hub computing system 12 may provide capture device 20 via communication link 436 with a clock that may be used to determine when to capture, for example, a scene.
  • capture device 20 provides depth information and visual (eg, RGB) images captured by, for example, 3-D camera 426 and/or RGB camera 428 to hub computing system 12 via communication link 436.
  • the depth image and the visual image are transmitted at a rate of 30 frames per second; however, other frame rates may be used.
  • the hub computing system 12 can then create models and use the models, depth information, and captured images to, for example, control applications such as games or word processing programs and/or animate avatars or on-screen characters.
  • the hub computing system 12 described above, together with the head mounted display device 2 and the processing unit 4, is capable of inserting a virtual three-dimensional object into the FOV of one or more users such that the virtual three-dimensional object expands and/or replaces the view of the real world.
  • the head mounted display device 2, the processing unit 4, and the hub computing system 12 work together because each of these devices includes a subset of the sensors used to obtain the data for determining where, when, and how to insert the virtual three-dimensional object.
  • the calculation of where, when, and how to insert the virtual three-dimensional object is performed by the hub computing system 12 and processing unit 4 that work in cooperation with each other. However, in still other embodiments, all calculations may be performed by the separately functioning hub computing system 12 or the processing unit(s) operating separately. In other embodiments, at least some of the calculations may be performed by the head mounted display device 2.
  • the hub 12 may further include a skeletal tracking module 450 for identifying and tracking users within another user's FOV.
  • the hub 12 can further include a gesture recognition engine 454 for identifying gestures performed by the user.
  • hub computing device 12 and processing unit 4 work together to create a scene graph or model of the environment in which the one or more users are located, as well as to track various moving objects in the environment.
  • hub computing system 12 and/or processing unit 4 tracks the FOV of head mounted display device 2 by tracking the position and orientation of head mounted display device 2 worn by user 18.
  • the sensor information obtained by the head mounted display device 2 is transmitted to the processing unit 4.
  • this information is communicated to the hub computing system 12, which updates the scene model and transmits it back to the processing unit.
  • Processing unit 4 uses the additional sensor information it receives from head mounted display device 2 to refine the user's FOV and provide instructions to head mounted display device 2 as to where, when, and how to insert the virtual object.
  • the scene model and the tracking information can be updated periodically between the hub computing system 12 and the processing unit 4 in a closed loop feedback system, as explained below.
  • FIG. 5 illustrates an example embodiment of a computing system that can be used to implement hub computing system 12.
  • the multimedia console 500 has a central processing unit (CPU) 501 having a level one cache 502, a level two cache 504, and a flash ROM (read only memory) 506.
  • the level one cache 502 and the level two cache 504 temporarily store data, and thus reduce the number of memory access cycles, thereby improving processing speed and throughput.
  • the CPU 501 can be equipped with more than one core and thus with additional primary and secondary caches 502 and 504.
  • the flash ROM 506 can store executable code that is loaded during the initialization phase of the boot process when the multimedia console 500 is powered on.
  • a graphics processing unit (GPU) 508 and a video encoder/video codec (encoder/decoder) 514 form a video processing pipeline for high speed and high resolution graphics processing.
  • Data is transferred from graphics processing unit 508 to video encoder/video codec 514 via a bus.
  • the video processing pipeline outputs data to an A/V (audio/video) port 540 for transmission to a television or other display.
  • Memory controller 510 is coupled to GPU 508 to facilitate processor access to various types of memory 512 such as, but not limited to, RAM (Random Access Memory).
  • the multimedia console 500 includes an I/O controller 520, a system management controller 522, an audio processing unit 523, a network interface 524, a first USB host controller 526, a second USB controller 528, and a front panel I/O sub-assembly 530, which are preferably implemented on a module 518.
  • USB controllers 526 and 528 serve as hosts for peripheral controllers 542(1)-542(2), wireless adapter 548, and external memory devices 546 (e.g., flash memory, external CD/DVD ROM drives, removable media, etc.).
  • Network interface 524 and/or wireless adapter 548 provide access to a network (e.g., the Internet, a home network, etc.) and may be any of a variety of wired or wireless adapter components, including Ethernet cards, modems, Bluetooth modules, cable modems, and the like.
  • System memory 543 is provided to store application data that is loaded during the boot process.
  • a media drive 544 is provided and may include a DVD/CD drive, a Blu-ray drive, a hard drive, or other removable media drive or the like.
  • Media drive 544 can be located internal or external to multimedia console 500.
  • Application data may be accessed via media drive 544 for execution, playback, etc. by multimedia console 500.
  • the media drive 544 is connected to the I/O controller 520 via a bus such as a Serial ATA bus or other high speed connection (eg, IEEE 1394).
  • the system management controller 522 provides various service functions related to ensuring the availability of the multimedia console 500.
  • Audio processing unit 523 and audio codec 532 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is transmitted between the audio processing unit 523 and the audio codec 532 via a communication link.
  • the audio processing pipeline outputs the data to the A/V port 540 for reproduction by an external audio player or a device having audio capabilities.
  • the front panel I/O sub-assembly 530 supports the functions of the power button 550 and the eject button 552 exposed on the outer surface of the multimedia console 500, as well as any LEDs (light emitting diodes) or other indicators.
  • System power supply module 536 provides power to the components of multimedia console 500.
  • Fan 538 cools the circuitry within multimedia console 500.
  • CPU 501, GPU 508, memory controller 510, and various other components within multimedia console 500 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, these architectures may include a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, and the like.
  • application data can be loaded from the system memory 543 into the memory 512 and/or the caches 502, 504 and executed on the CPU 501.
  • the application can present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 500.
  • applications and/or other media contained in media drive 544 can be launched or played from media drive 544 to provide additional functionality to multimedia console 500.
  • the multimedia console 500 can operate as a stand-alone system by simply connecting the system to a television or other display.
  • Capture device 20 may define additional input devices for console 500 via USB controller 526 or other interface.
  • in other embodiments, hub computing system 12 can be implemented using other hardware architectures; no single hardware architecture is required.
  • Head mounted display device 2 and processing unit 4 are in communication with a hub computing system 12 (also referred to as hub 12).
  • Each of the mobile display devices can communicate with the hub using wireless communication as described above. In such an embodiment, it is contemplated that much of the information that is useful to the mobile display devices will be computed and stored at the hub and transmitted to each mobile display device.
  • the hub will generate a model of the environment and provide that model to all mobile display devices in communication with the hub. Additionally, the hub can track the position and orientation of the mobile display device as well as the moving objects in the room, and then transmit this information to each mobile display device.
  • the system can include a plurality of hubs 12, each of which includes one or more mobile display devices.
  • the hubs can communicate directly with each other or via the Internet (or other network).
  • the hub 12 can be omitted altogether.
  • All of the functions performed by the hub 12 in the following description may alternatively be performed by one of the processing units 4, by some of the processing units 4 working cooperatively, or by all of the processing units 4 working cooperatively.
  • the respective mobile display device 2 performs all functions of the system 10, including generating and updating state data, the scene graph, each user's view of the scene graph, all texture and rendering information, video and audio data, and other information needed to perform the operations described herein.
  • the hub 12 and processing unit 4 collect data from the scene.
  • this may be image and audio data sensed by depth camera 426 and RGB camera 428 of capture device 20.
  • this may be image data sensed by head mounted display device 2 at step 656, and in particular, image data sensed by camera 112, eye tracking component 134, and IMU 132.
  • the data collected by the head mounted display device 2 is sent to the processing unit 4.
  • Processing unit 4 processes this data in step 630 and sends it to hub 12.
  • the hub 12 performs various setup steps that allow the hub 12 to coordinate image data of its capture device 20 and one or more processing units 4.
  • the camera on the head mounted display device 2 is also moved around in the scene.
  • the positions and capture timing of each of the imaging cameras need to be calibrated to the scene, to each other, and to the hub 12.
  • the clock offsets of the various imaging devices in system 10 are first determined. In particular, to coordinate image data from each of the cameras in the system, it can be confirmed that the coordinated image data is from the same time.
  • image data from capture device 20 and image data incoming from one or more processing units 4 are time stamped with a single master clock in hub 12. Using the time stamps for all such data for a given frame, and using the known resolution of each of the cameras, the hub 12 determines the time offset of each of the imaging cameras in the system. Accordingly, the hub 12 can determine the differences between the images received from each camera and the adjustments to those images.
  • the hub 12 can select a reference timestamp from the frames received by one of the cameras. The hub 12 can then add time to or subtract time from the image data received from all other cameras to synchronize with the reference timestamp. It is understood that for the calibration process, various other operations can be used to determine the time offset and/or to synchronize different cameras together. The determination of the time offset can be performed once when image data from all cameras is initially received. Alternatively, it may be performed periodically, such as for example every frame or a certain number of frames.
  • the hub 12 and/or one or more processing units 4 can form a scene map or model that identifies the geometry of the scene and the geometry and location of objects (including users) within the scene. Depth and/or RGB data can be used when calibrating image data of all cameras to each other.
  • the hub 12 can then convert the distortion corrected image data points captured by each camera from a camera view to an orthogonal 3D world view.
  • This orthogonal 3D world view is a point cloud map of all image data captured by the capture device 20 and the head mounted display device camera in an orthogonal x, y, z Cartesian coordinate system. Matrix transformation formulas for converting camera views into orthogonal 3D world views are known.
  • a preset virtual image is displayed in the AR/MR display device based on the positioning of the geolocation system and the infrared laser positioning system.
  • the virtual image has preset positioning coordinates, including but not limited to geographic coordinates, 3D coordinates, or relative coordinates.
  • the relative coordinates include relative geographic coordinates and/or relative 3D coordinates.
  • the virtual image allows the user to see, in the AR/MR display device, the object to be detected or to be constructed, for example a deeply buried underground pipeline or a location to be drilled. The setting of the virtual image coordinates is therefore closely tied to the positioning system.
  • the geographic coordinates of the virtual image can be set; when the AR/MR display device is within the preset geographic coordinate range, the virtual image appears at the corresponding position in the field of view according to the 3D coordinates and posture of the AR/MR display device.
  • relative coordinates can also be set.
  • a marker can be used as a feature point, and the relative coordinates of the object to be detected or constructed can be preset with respect to that feature point, for example 2 m due east of a manhole cover and 1 m deep, or in the middle of a wooden sign, 20 cm from its upper and lower edges, and so on.
  • Multiple markers can be used as feature points for positioning, which is more accurate.
  • the feature point is confirmed by image recognition, and the preset virtual image is superimposed on the real image.
  • the position of the virtual image is adjusted in real time with the position and posture of the AR/MR display device, and the user can see the virtual image corresponding to the real image.
  • presetting virtual images using geographic coordinates is relatively straightforward, but its accuracy depends on the accuracy of the geolocation system used and on the surrounding environmental conditions.
  • preferably, the position of the virtual image is preset using geographic coordinates supplemented by relative positions. Those skilled in the art can choose freely according to the needs and budget of the actual application.
  • virtual images may be added remotely through the cloud.
  • the capture device (for example, the infrared laser scanning system) obtains scene information of the environment in which the user is located (by default assumed to coincide with the position of the AR/MR display device), and the coordinates of the preset virtual image in that scene are added remotely based on the newly acquired information.
  • infrared locators are provided at several fixed locations near the surveyor to receive infrared laser signals.
  • the database of the AR/MR detection system is formed by superimposing the GPS positioning technology provided by Trimble, the 3D modeling technology of Google Project Tango, and the GIS data of the pipelines in the database.
  • a virtual pipeline image is displayed on the display screen.
  • the position of the virtual pipeline image in the display screen needs to match the GIS data corresponding to the FOV.
  • the explorer can easily identify the location of hidden pipelines and construct or detect them at the appropriate locations.
  • a geolocation device is disposed near the area to be tested, and receives a mobile communication signal for positioning.
  • the database of the AR/MR detection system is formed by superimposing the 3D modeling technology of Google Project Tango, the data of the mobile communication base stations, and the GIS data of the pipelines in the database.
  • a virtual pipeline image is displayed on the display screen based on the GPS data of the AR/MR display device and the position sensor data of the smartphone.
  • the position of the virtual pipeline image in the display screen must match the GIS data corresponding to the location of the smartphone.
  • An infrared positioner is provided at several fixed locations near the constructor to receive infrared laser signals.
  • the GPS positioning technology provided by Trimble and the 3D modeling technology of Google Project Tango are used to model the construction site, virtual images of the targets to be constructed are set at the corresponding positions, and these are superimposed to form the database of the AR/MR construction system.
  • a virtual image of the target to be constructed is displayed on the display screen.
  • the position of the image in the FOV needs to match the position where the actual construction is required.
  • through Microsoft Hololens, the surveyor can see the locations to be worked on, such as drilling positions or positions where items are to be placed. This eliminates the need for measurement and is suitable for construction sites where humans cannot be completely replaced by machines.
  • embodiments of the present disclosure may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the present disclosure may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors.
  • a machine-readable medium can include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device).
  • a machine-readable medium can include read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); and others.
  • firmware, software, routines, instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are for convenience only, and that such acts actually result from a computing device, processor, controller, or other device executing firmware, software, routines, instructions, and the like.
  • references to "one embodiment", "an embodiment", "an example embodiment", or similar phrases mean that the described embodiment may include a particular feature, structure, or characteristic, but not every embodiment necessarily includes that particular feature, structure, or characteristic. In addition, these phrases are not necessarily referring to the same embodiment. Further, whether or not explicitly described herein, it is within the knowledge of a person skilled in the relevant art to incorporate such features, structures, or characteristics into other embodiments.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Life Sciences & Earth Sciences (AREA)
  • Geophysics (AREA)
  • Computer Graphics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A detection system based on a positioning system and AR/MR, the detection system comprising: an AR/MR display device for displaying AR/MR images; a geolocation system for determining the geographic location of the AR/MR display device; an infrared laser positioning system for determining the 3D coordinates and posture of the AR/MR display device within a determined area; and an AR/MR detection system comprising a database and a processing unit, wherein the database stores data of the object to be detected in the area to be detected and a virtual image of the object to be detected, and the processing unit superimposes the virtual image of the object to be detected on the real image displayed by the AR/MR display device according to the geographic location, 3D coordinates, and posture of the AR/MR display device.

Description

Detection system and detection method based on a positioning system and AR/MR
Technical Field
The present invention relates to the field of detection, and more particularly to a detection system and a detection method based on a positioning system and AR/MR technology.
Background Art
Detection is a very traditional technology. Surveying, prospecting, and pipeline inspection are all forms of detection. Depending on the specific detection purpose, conventional methods include determining the position of an invisible object by receiving reflected waves using sonar, radar, infrared, and the like. It takes the average person a considerable time to master these conventional methods, and the equipment is relatively cumbersome and complex; a simpler detection system that fits the habits of modern users is urgently needed.
Summary of the Invention
In view of this, the inventors wish to apply the latest MR/AR technology to this field.
According to a first aspect, the present invention provides a detection system based on a positioning system and AR/MR, comprising:
an AR/MR display device for displaying AR/MR images;
a geolocation system for determining the geographic location of the AR/MR display device;
an infrared laser positioning system for determining the 3D coordinates and posture of the AR/MR display device within a determined area;
an AR/MR detection system comprising a database and a processing unit, wherein the database stores data of the object to be detected in the area to be detected and a virtual image of the object to be detected, and the processing unit superimposes the virtual image of the object to be detected on the real image displayed by the AR/MR display device according to the geographic location, 3D coordinates, and posture of the AR/MR display device.
Preferably, the geolocation system comprises one or more of GPS, GSM, and LTE positioning systems.
In some embodiments of the present invention, the geolocation system comprises terrestrial base stations and/or terrestrial signal enhancement points.
Preferably, a portion of the AR/MR detection system can be located in the cloud.
The AR/MR display device includes a head mounted display device, a smart phone, or a tablet computer.
In some embodiments of the present invention, the system further comprises a detection device. The detection device includes, but is not limited to, a sensor.
In some embodiments of the present invention, the detection device comprises a capture device for collecting scene information. The capture device includes, but is not limited to, a depth camera.
Preferably, the AR/MR display device can receive interaction information from the user.
According to a second aspect, the present invention provides a detection method based on a positioning system and AR/MR, comprising:
displaying a preset virtual image in the AR/MR display device based on the positioning of the geolocation system and the infrared laser positioning system.
Preferably, the virtual image has preset positioning coordinates.
Preferably, the preset positioning coordinates of the virtual image include geographic coordinates.
Preferably, the preset positioning coordinates of the virtual image include relative position coordinates. The relative position coordinates include relative geographic coordinates and/or relative 3D coordinates.
In some embodiments of the present invention, the infrared laser positioning system is used to determine the 3D coordinates and posture of the AR/MR display device within the field area. In some cases, within a determined area, the AR/MR display device can be positioned using relative positions, and likewise the display coordinates of the virtual image can be preset as relative positions. When the AR/MR display device enters the determined area, the preset virtual image is displayed at the relative position. Alternatively, a feature point in the real image is confirmed by image recognition, and the virtual image is displayed at a preset position relative to the feature point. In some cases, the geographic location of the AR/MR display device is determined by the geolocation system, and when the AR/MR display device is within a preset geographic coordinate range, the virtual image is displayed at the preset relative position.
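For illustration only, the following minimal sketch outlines the display logic just described, under simplifying assumptions: a rectangular latitude/longitude box stands in for the preset geographic coordinate range, and a single east/north/up offset stands in for the preset relative position. The class and function names are invented for this sketch and are not part of the original disclosure.

```python
# Minimal sketch (illustrative, not the patented implementation): a preset
# virtual image is shown only when the display device enters its preset
# geographic range, and is then placed at a preset offset relative to an anchor.
from dataclasses import dataclass

@dataclass
class VirtualImage:
    name: str
    lat_range: tuple      # (min_lat, max_lat) of the preset geographic range
    lon_range: tuple      # (min_lon, max_lon)
    offset_enu: tuple     # preset relative position (east, north, up) in metres

def should_display(image: VirtualImage, device_lat: float, device_lon: float) -> bool:
    """True when the AR/MR display device is inside the preset geographic range."""
    return (image.lat_range[0] <= device_lat <= image.lat_range[1]
            and image.lon_range[0] <= device_lon <= image.lon_range[1])

def placement(image: VirtualImage, anchor_xyz: tuple) -> tuple:
    """Place the virtual image at the preset offset relative to an anchor point
    (e.g. a feature point confirmed by image recognition), in local 3D coordinates."""
    return tuple(a + o for a, o in zip(anchor_xyz, image.offset_enu))

pipe = VirtualImage("buried pipeline", (39.9000, 39.9010), (116.3900, 116.3915), (2.0, 0.0, -1.0))
if should_display(pipe, 39.9004, 116.3907):
    print(placement(pipe, (0.0, 0.0, 0.0)))   # -> (2.0, 0.0, -1.0)
```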
According to a third aspect, the present invention provides a non-transitory computer readable medium storing program instructions which, when executed by a processing device, cause the device to:
display a preset virtual image in the AR/MR display device based on the positioning of the geolocation system and the infrared laser positioning system.
Preferably, the virtual image has preset positioning coordinates.
Preferably, the preset positioning coordinates of the virtual image include geographic coordinates.
Preferably, the preset positioning coordinates of the virtual image include relative position coordinates. The relative position coordinates include relative geographic coordinates and/or relative 3D coordinates.
In some embodiments of the present invention, the infrared laser positioning system is used to determine the 3D coordinates and posture of the AR/MR display device within the field area. In some cases, within a determined area, the AR/MR display device can be positioned using relative positions, and likewise the display coordinates of the virtual image can be preset as relative positions. When the AR/MR display device enters the determined area, the preset virtual image is displayed at the relative position. Alternatively, a feature point in the real image is confirmed by image recognition, and the virtual image is displayed at a preset position relative to the feature point. In some cases, the geographic location of the AR/MR display device is determined by the geolocation system, and when the AR/MR display device is within a preset geographic coordinate range, the virtual image is displayed at the preset relative position.
Brief Description of the Drawings
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings, so that the above and other features and advantages of the present invention become clearer to those of ordinary skill in the art. In the drawings:
Figure 1 shows an example of a detection system based on a positioning system and AR/MR;
Figure 2 shows an example of the global positioning system used in the present invention;
Figure 3 shows an embodiment of an AR/MR display device;
Figure 4 shows an embodiment of a processing unit associated with the AR/MR display device;
Figure 5 shows an embodiment of a computer system implementing the detection system of the present invention;
Figure 6 is a flow chart showing the cooperation of the AR/MR detection system, the infrared laser positioning/scanning system, and the display device.
Detailed Description
In the following description, numerous specific details are given in order to provide a more thorough understanding of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced without one or more of these details. In other instances, some technical features that are well known in the art have not been described in order to avoid obscuring the present invention.
In the present invention, GPS or mobile communication signals are used as the global positioning system. When mobile communication signals are used, the signal can be enhanced as follows: a plurality of locators are set near the position to be detected and periodically transmit positioning signals to their surroundings, and the coverage of a locator's positioning signal serves as that locator's positioning region. In this embodiment, the locator periodically transmits a spherical low frequency electromagnetic field to its surroundings (the coverage radius is determined by the environment and the transmission power).
A positioning tag is attached to the surveyor. When the positioning tag enters the positioning region of a locator, it receives the positioning signal emitted by that locator, resolves the locator's ID number from it, and sends the locator's ID number together with the positioning tag's own ID number to a positioning communication base station. The positioning tag is mainly used to locate the tracked object; its function is to receive the low frequency magnetic field signal emitted by a locator and to resolve the locator's ID number from the signal.
One or more positioning communication base stations provide wireless signal coverage for the positioning regions of all the locators, and send the base station's ID number, the received locator ID number, the positioning tag's ID number, and the positioning time (the base station records the time at which it receives the positioning tag's transmission as the positioning time) to the positioning engine server.
The positioning engine server is connected to the positioning communication base stations via Ethernet, receives the base station ID number, the locator ID number, the positioning tag ID number, and the positioning time, and after processing obtains the movement trajectory of the positioning tag (that is, the movement trajectory of the tag carrier).
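As a rough illustration of the bookkeeping such a positioning engine might perform, the sketch below orders tag reports by positioning time and maps each report to the known installation point of the reported locator. The locator coordinates, IDs, and report format are assumed values for this sketch and are not defined in the original disclosure.

```python
# Illustrative positioning-engine sketch: each report carries (base-station ID,
# locator ID, tag ID, positioning time); the tag's trajectory is approximated by
# the sequence of known locator positions, ordered by time.
from collections import defaultdict

LOCATOR_POSITIONS = {            # assumed installation points of the locators (x, y) in metres
    "LOC-01": (0.0, 0.0),
    "LOC-02": (5.0, 0.0),
    "LOC-03": (5.0, 5.0),
}

def build_trajectories(reports):
    """reports: iterable of dicts with keys base_id, locator_id, tag_id, t (seconds)."""
    tracks = defaultdict(list)
    for r in sorted(reports, key=lambda r: r["t"]):
        pos = LOCATOR_POSITIONS.get(r["locator_id"])
        if pos is not None:                       # ignore reports from unknown locators
            tracks[r["tag_id"]].append((r["t"], pos))
    return dict(tracks)

reports = [
    {"base_id": "BS-1", "locator_id": "LOC-01", "tag_id": "TAG-7", "t": 0.0},
    {"base_id": "BS-1", "locator_id": "LOC-02", "tag_id": "TAG-7", "t": 2.5},
    {"base_id": "BS-1", "locator_id": "LOC-03", "tag_id": "TAG-7", "t": 5.0},
]
print(build_trajectories(reports)["TAG-7"])  # movement trajectory of the tag carrier
```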
The system for implementing a mixed reality environment in the present invention may include a mobile display device in communication with a hub computing system. The mobile display device may include a mobile processing unit coupled to a head mounted display device (or other suitable device).
The head mounted display device may include a display element. The display element is transparent to a degree such that the user can see real world objects within the user's field of view (FOV) through the display element. The display element also provides the ability to project virtual images into the user's FOV so that the virtual images may also appear next to real world objects. The system automatically tracks where the user is looking so that the system can determine where to insert the virtual image into the user's FOV. Once the system knows where to project the virtual image, the display element is used to project the image.
In embodiments, the hub computing system and one or more of the processing units may cooperate to build a model of the environment including the x, y, z Cartesian positions of all users, real world objects, and virtual three-dimensional objects in a room or other environment. The positions of each head mounted display device worn by a user in the environment may be calibrated to the model of the environment and to each other. This allows the system to determine each user's line of sight and FOV of the environment. Thus, a virtual image may be displayed to each user, but the system determines the display of the virtual image from each user's perspective, adjusting the virtual image for any parallax and occlusion from or due to other objects in the environment. The model of the environment (referred to herein as a scene graph), as well as the tracking of each user's FOV and of the objects in the environment, may be generated by the hub and mobile processing units working in tandem or working individually.
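The following is a minimal, illustrative sketch of such an environment model, assuming a flat list of entities with Cartesian positions and a crude range-based visibility test; the data structure and field names are invented for illustration and are not taken from the original disclosure.

```python
# Illustrative scene-graph sketch: every user, real-world object and virtual
# object is stored with an x, y, z Cartesian position in a shared frame, from
# which each user's nearby (potentially visible) entities can be derived.
from dataclasses import dataclass, field

@dataclass
class SceneEntity:
    name: str
    kind: str                               # "user" | "real" | "virtual"
    position: tuple                         # (x, y, z) in the shared Cartesian frame
    orientation: tuple = (0.0, 0.0, 0.0)    # yaw, pitch, roll in degrees

@dataclass
class SceneGraph:
    entities: list = field(default_factory=list)

    def add(self, e: SceneEntity) -> None:
        self.entities.append(e)

    def visible_from(self, user: SceneEntity, max_range: float = 10.0):
        """Very rough visibility test: everything within max_range of the user."""
        ux, uy, uz = user.position
        return [e for e in self.entities
                if e is not user
                and ((e.position[0] - ux) ** 2 + (e.position[1] - uy) ** 2
                     + (e.position[2] - uz) ** 2) ** 0.5 <= max_range]

scene = SceneGraph()
u = SceneEntity("user18", "user", (0.0, 0.0, 1.7))
scene.add(u)
scene.add(SceneEntity("plant23", "real", (1.0, 2.0, 0.0)))
scene.add(SceneEntity("virtual_pipe", "virtual", (2.0, 0.0, -1.0)))
print([e.name for e in scene.visible_from(u)])   # ['plant23', 'virtual_pipe']
```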
As set forth below, one or more users may choose to interact with shared or private virtual objects appearing within the user's FOV. As used herein, the term "interaction" encompasses both physical and verbal interaction of a user with a virtual object.
As used herein, a user simply looking at a virtual object (such as viewing content in a shared virtual object) is another example of a user's physical interaction with a virtual object.
As seen in Figures 3 and 4, the head mounted display device 2 may include an integrated processing unit 4. In other embodiments, the processing unit 4 may be separate from the head mounted display device 2 and may communicate with the head mounted display device 2 via wired or wireless communication.
The head mounted display device 2, which in one embodiment is in the shape of eyeglasses, is worn on the head of the user so that the user can see through a display and thereby have an actual direct view of the space in front of the user. The term "actual direct view" is used to refer to the ability to see real world objects directly with the human eye, rather than seeing a created image representation of the objects. For example, looking through eyeglasses at a room allows the user to have an actual direct view of the room, whereas viewing a video of a room on a television is not an actual direct view of the room. More details of the head mounted display device 2 are provided below.
The processing unit 4 may include much of the computing power used to operate the head mounted display device 2. In some embodiments, the processing unit 4 communicates wirelessly (e.g., WiFi, Bluetooth, infrared, or other wireless communication means) with one or more hub computing systems 12. As explained hereinafter, the hub computing system 12 may be provided remotely from the processing unit 4, so that the hub computing system 12 and the processing unit 4 communicate over a wireless network such as a LAN or a WAN. In further embodiments, the hub computing system 12 may be omitted to provide a mobile mixed reality experience using the head mounted display device 2 and the processing unit 4.
The hub computing system 12 may be a computer, a gaming system or console, or the like. According to an example embodiment, the hub computing system 12 may include hardware components and/or software components so that the hub computing system 12 may be used to execute applications such as gaming applications, non-gaming applications, and the like. In one embodiment, the hub computing system may include a processor, such as a standardized processor, a specialized processor, a microprocessor, or the like, that may execute instructions stored on a processor readable storage device to perform the processes described herein.
The hub computing system 12 further includes a capture device for capturing image data from portions of a scene within its FOV. As used herein, a scene is the environment in which the user moves around, which environment is captured within the FOV of the capture device and/or the FOV of each head mounted display device 2. There may be multiple capture devices that cooperate to collectively capture image data from a scene within the composite FOV of the multiple capture devices 20. The capture device 20 may include one or more cameras that visually monitor the user 18 and the surrounding space so that gestures and/or movements performed by the user, as well as the structure of the surrounding space, may be captured, analyzed, and tracked to perform one or more controls or actions within an application and/or to animate an avatar or on-screen character.
The hub computing system 12 may be connected to an audiovisual device 16, such as a television, a monitor, a high-definition television (HDTV), or the like, that may provide game or application visuals. In one example, the audiovisual device 16 includes built-in speakers. In other embodiments, the audiovisual device 16 and the hub computing system 12 may be connected to external speakers 22.
The hub computing system 12, together with the head mounted display device 2 and the processing unit 4, may provide a mixed reality experience in which one or more virtual images, such as virtual object 21 in Figure 1, may be mixed together with real world objects in a scene. Figure 1 illustrates an example of a plant 23 or a user's hand 23 as real world objects appearing within the user's FOV.
Control circuitry 136 provides various electronics that support the other components of the head mounted display device 2. More details of control circuitry 136 are provided below with reference to Figure 4. Inside, or mounted to, temple 102 are earphones 130, an inertial measurement unit 132, and a temperature sensor 138. In one embodiment shown in Figure 4, the inertial measurement unit 132 (or IMU 132) includes inertial sensors such as a three-axis magnetometer 132A, a three-axis gyroscope 132B, and a three-axis accelerometer 132C. The inertial measurement unit 132 senses the position, orientation, and sudden accelerations (pitch, roll, and yaw) of the head mounted display device 2. The IMU 132 may include other inertial sensors in addition to, or in place of, the magnetometer 132A, the gyroscope 132B, and the accelerometer 132C.
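For illustration, the sketch below shows one simple way such IMU readings could be combined into an orientation estimate: naive gyroscope integration plus a tilt estimate from the accelerometer. The sample rate, readings, and function names are assumptions for this sketch and do not describe the device's actual sensor fusion.

```python
# Hedged IMU sketch: integrate the three-axis gyroscope for orientation and use
# the accelerometer (gravity direction) as a pitch/roll sanity check.
import math

def integrate_gyro(orientation, gyro_dps, dt):
    """orientation and gyro_dps are (pitch, roll, yaw) in degrees / deg-per-second."""
    return tuple(o + w * dt for o, w in zip(orientation, gyro_dps))

def tilt_from_accel(accel_g):
    """Pitch/roll recovered from the gravity direction (ax, ay, az) in g."""
    ax, ay, az = accel_g
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

orientation = (0.0, 0.0, 0.0)            # pitch, roll, yaw
for _ in range(100):                      # 100 samples at an assumed 100 Hz
    orientation = integrate_gyro(orientation, gyro_dps=(0.0, 0.0, 9.0), dt=0.01)
print(orientation)                        # yaw has drifted to about 9 degrees
print(tilt_from_accel((0.0, 0.0, 1.0)))   # level device -> pitch = roll = 0
```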
Microdisplay 120 projects an image through lens 122. There are different image generation technologies that can be used to implement microdisplay 120. For example, microdisplay 120 can be implemented using a transmissive projection technology, where the light source is modulated by an optically active material and backlit with white light. These technologies are usually implemented using LCD type displays with powerful backlights and high optical energy densities. Microdisplay 120 can also be implemented using a reflective technology, where external light is reflected and modulated by an optically active material. Depending on the technology, the illumination is provided from the front by either a white light source or an RGB source. Digital light processing (DLP), liquid crystal on silicon (LCOS), and display technology from Qualcomm, Inc. are all examples of efficient reflective technologies (as most energy is reflected away from the modulated structure) and may be used in the present system. Additionally, microdisplay 120 can be implemented using an emissive technology, where light is generated by the display. For example, the PicoP (TM) display engine from Microvision, Inc. uses a miniature steering mirror to scan a laser signal onto a small screen that acts as a transmissive element, or to emit a beam of light (e.g., a laser) directly into the eye.
Figure 3 is a block diagram depicting the various components of the head mounted display device 2. Figure 4 is a block diagram describing the various components of the processing unit 4. The head mounted display device 2, the components of which are depicted in Figure 3, is used to provide a mixed reality experience to the user by seamlessly fusing one or more virtual images with the user's view of the real world. Additionally, the head mounted display device components include a number of sensors that track various conditions. The head mounted display device 2 will receive instructions about the virtual image from the processing unit 4 and will provide sensor information back to the processing unit 4. The processing unit 4, the components of which are depicted in Figure 4, will receive the sensory information from the head mounted display device 2 and will exchange information and data with the hub computing device 12. Based on that exchange of information and data, the processing unit 4 will determine where and when to provide a virtual image to the user and send instructions to the head mounted display device accordingly.
In one embodiment, all components of control circuit 200 are in communication with each other via dedicated lines or one or more buses. In another embodiment, each component of control circuit 200 is in communication with processor 210. Camera interface 216 provides an interface to the two room-facing cameras 112 and stores the images received from the room-facing cameras in camera buffer 218. Display driver 220 will drive microdisplay 120. Display formatter 222 provides information about the virtual image being displayed on microdisplay 120 to opacity control circuit 224, which controls opacity filter 114. Timing generator 226 is used to provide timing data to the system. Display output interface 228 is a buffer for providing images from the room-facing cameras 112 to the processing unit 4. Display input interface 230 is a buffer for receiving images, such as a virtual image to be displayed on microdisplay 120. Display output interface 228 and display input interface 230 communicate with band interface 232, which is an interface to the processing unit 4.
Power management circuit 202 includes voltage regulator 234, eye tracking illumination driver 236, audio DAC and amplifier 238, microphone preamplifier and audio ADC 240, temperature sensor interface 242, and clock generator 244. Voltage regulator 234 receives power from the processing unit 4 via band interface 232 and provides that power to the other components of the head mounted display device 2. Each eye tracking illumination driver 236 provides an IR light source for eye tracking illumination 134A, as described above. Audio DAC and amplifier 238 output audio information to earphones 130. Microphone preamplifier and audio ADC 240 provide an interface for microphone 110. Temperature sensor interface 242 is an interface for temperature sensor 138. Power management circuit 202 also provides power to, and receives data back from, the three-axis magnetometer 132A, the three-axis gyroscope 132B, and the three-axis accelerometer 132C.
Figure 4 is a block diagram describing the various components of the processing unit 4. Figure 4 shows control circuit 304 in communication with power management circuit 306. Control circuit 304 includes a central processing unit (CPU) 320, a graphics processing unit (GPU) 322, a cache 324, RAM 326, a memory controller 328 in communication with memory 330 (e.g., D-RAM), a flash memory controller 332 in communication with flash memory 334 (or other types of non-volatile storage), a display output buffer 336 in communication with the head mounted display device 2 via band interface 302 and band interface 232, a display input buffer 338 in communication with the head mounted display device 2 via band interface 302 and band interface 232, a microphone interface 340 in communication with an external microphone connector 342 for connecting to a microphone, a PCI Express interface for connecting to wireless communication device 346, and one or more USB ports 348. In one embodiment, wireless communication device 346 may include a Wi-Fi enabled communication device, a Bluetooth communication device, an infrared communication device, and the like. The USB port may be used to dock the processing unit 4 to the hub computing system 12 in order to load data or software onto the processing unit 4 and to charge the processing unit 4. In one embodiment, CPU 320 and GPU 322 are the main workhorses for determining where, when, and how to insert virtual three-dimensional objects into the user's field of view. More details are provided below.
Power management circuit 306 includes clock generator 360, analog-to-digital converter 362, battery charger 364, voltage regulator 366, head mounted display power supply 376, and temperature sensor interface 372 in communication with temperature sensor 374 (which may be located on the wristband of the processing unit 4). Analog-to-digital converter 362 is used to monitor the battery voltage and the temperature sensor, and to control the battery charging function. Voltage regulator 366 is in communication with battery 368 for providing power to the system. Battery charger 364 is used to charge battery 368 (via voltage regulator 366) upon receiving power from charging jack 370. HMD power supply 376 provides power to the head mounted display device 2.
Camera component 423 may include an infrared (IR) light component 425, a three-dimensional (3D) camera 426, and an RGB (visual image) camera 428 that may be used to capture a depth image of a scene. For example, in time-of-flight analysis, the IR light component 425 of capture device 20 may emit infrared light onto the scene, and sensors (in some embodiments including sensors not shown) may then be used, for example the 3-D camera 426 and/or the RGB camera 428, to detect light backscattered from the surfaces of one or more targets and objects in the scene.
In an example embodiment, capture device 20 may further include a processor 432 that may be in communication with image camera component 423. Processor 432 may include a standard processor, a specialized processor, a microprocessor, or the like, that may execute instructions including, for example, instructions for receiving a depth image, generating a suitable data format (e.g., a frame), and transmitting the data to the hub computing system 12.
Capture device 20 may further include a memory 434 that may store the instructions executed by processor 432, images or frames of images captured by the 3-D camera and/or the RGB camera, or any other suitable information, images, or the like. According to an example embodiment, memory 434 may include random access memory (RAM), read only memory (ROM), cache, flash memory, a hard disk, or any other suitable storage component. As shown in Figure 6, in one embodiment, memory 434 may be a separate component in communication with image camera component 423 and processor 432. According to another embodiment, memory 434 may be integrated into processor 432 and/or image capture component 423.
Capture device 20 communicates with the hub computing system 12 via a communication link 436. The communication link 436 may be a wired connection including, for example, a USB connection, a FireWire connection, an Ethernet cable connection, or the like, and/or a wireless connection such as a wireless 802.11b, 802.11g, 802.11a, or 802.11n connection. According to one embodiment, the hub computing system 12 may provide a clock to capture device 20 via the communication link 436 that may be used to determine when to capture, for example, a scene. Additionally, capture device 20 provides the depth information and visual (e.g., RGB) images captured by, for example, the 3-D camera 426 and/or the RGB camera 428 to the hub computing system 12 via the communication link 436. In one embodiment, the depth images and visual images are transmitted at 30 frames per second, although other frame rates may be used. The hub computing system 12 may then create a model and use the model, the depth information, and the captured images to, for example, control an application such as a game or word processor and/or animate an avatar or on-screen character.
The hub computing system 12 described above, together with the head mounted display device 2 and the processing unit 4, is able to insert a virtual three-dimensional object into the FOV of one or more users so that the virtual three-dimensional object augments and/or replaces the view of the real world. In one embodiment, the head mounted display device 2, the processing unit 4, and the hub computing system 12 work together, as each of these devices includes a subset of the sensors used to obtain the data for determining where, when, and how to insert the virtual three-dimensional object. In one embodiment, the calculations that determine where, when, and how to insert a virtual three-dimensional object are performed by the hub computing system 12 and the processing unit 4 working in cooperation with each other. However, in further embodiments, all of the calculations may be performed by the hub computing system 12 working alone or by the processing unit(s) 4 working alone. In other embodiments, at least some of the calculations may be performed by the head mounted display device 2.
The hub 12 may further include a skeletal tracking module 450 for identifying and tracking users within the FOV of another user. The hub 12 may further include a gesture recognition engine 454 for recognizing gestures performed by a user.
In an example embodiment, the hub computing device 12 and the processing unit 4 work together to create the scene graph or model of the environment in which the one or more users are located, as well as to track various moving objects within that environment. In addition, the hub computing system 12 and/or the processing unit 4 track the FOV of the head mounted display device 2 by tracking the position and orientation of the head mounted display device 2 worn by the user 18. The sensor information obtained by the head mounted display device 2 is transmitted to the processing unit 4. In one embodiment, that information is transmitted to the hub computing system 12, which updates the scene model and transmits it back to the processing unit. The processing unit 4 then uses the additional sensor information it receives from the head mounted display device 2 to refine the user's FOV and provide instructions to the head mounted display device 2 on where, when, and how to insert virtual objects. Based on sensor information from the cameras in the capture device 20 and the head mounted display device(s) 2, the scene model and the tracking information may be updated periodically between the hub computing system 12 and the processing unit 4 in a closed loop feedback system, as explained below.
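A highly simplified sketch of that closed loop is given below; the Hub and ProcessingUnit classes are stand-ins used only to show the direction of the data flow, not the actual components or message formats of the system described above.

```python
# Simplified closed-loop sketch: the head mounted display sends sensor data to
# the processing unit, the hub refreshes the scene model, and the processing
# unit refines the FOV and returns draw instructions for the display.
class Hub:
    def __init__(self):
        self.scene_model = {"objects": []}

    def update_scene(self, processed):
        self.scene_model["last_pose"] = processed["pose"]
        return self.scene_model

class ProcessingUnit:
    def __init__(self, hub):
        self.hub = hub

    def step(self, sensor_data):
        processed = {"pose": sensor_data["imu"], "images": sensor_data["cameras"]}
        scene = self.hub.update_scene(processed)          # hub updates and returns the model
        fov = {"center": processed["pose"], "scene": scene}
        return {"draw_at": fov["center"], "when": "next_frame"}  # instructions for the display

hub = Hub()
pu = ProcessingUnit(hub)
print(pu.step({"imu": (0.0, 0.0, 1.7), "cameras": ["frame0"]}))
```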
Figure 5 illustrates an example embodiment of a computing system that may be used to implement the hub computing system 12. As shown in Figure 5, the multimedia console 500 has a central processing unit (CPU) 501 having a level 1 cache 502, a level 2 cache 504, and a flash ROM (read only memory) 506. The level 1 cache 502 and the level 2 cache 504 temporarily store data and hence reduce the number of memory access cycles, thereby improving processing speed and throughput. The CPU 501 may be provided with more than one core, and thus with additional level 1 and level 2 caches 502 and 504. The flash ROM 506 may store executable code that is loaded during an initial phase of a boot process when the multimedia console 500 is powered on.
A graphics processing unit (GPU) 508 and a video encoder/video codec (coder/decoder) 514 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 508 to the video encoder/video codec 514 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 540 for transmission to a television or other display. A memory controller 510 is connected to the GPU 508 to facilitate processor access to various types of memory 512, such as, but not limited to, RAM (random access memory).
The multimedia console 500 includes an I/O controller 520, a system management controller 522, an audio processing unit 523, a network interface 524, a first USB host controller 526, a second USB controller 528, and a front panel I/O subassembly 530 that are preferably implemented on a module 518. The USB controllers 526 and 528 serve as hosts for peripheral controllers 542(1)-542(2), a wireless adapter 548, and an external memory device 546 (e.g., flash memory, an external CD/DVD ROM drive, removable media, etc.). The network interface 524 and/or wireless adapter 548 provide access to a network (e.g., the Internet, a home network, etc.) and may be any of a wide variety of wired or wireless adapter components, including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
System memory 543 is provided to store application data that is loaded during the boot process. A media drive 544 is provided and may comprise a DVD/CD drive, a Blu-ray drive, a hard disk drive, or other removable media drive, etc. The media drive 544 may be internal or external to the multimedia console 500. Application data may be accessed via the media drive 544 for execution, playback, etc. by the multimedia console 500. The media drive 544 is connected to the I/O controller 520 via a bus such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).
The system management controller 522 provides a variety of service functions related to assuring the availability of the multimedia console 500. The audio processing unit 523 and an audio codec 532 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 523 and the audio codec 532 via a communication link. The audio processing pipeline outputs data to the A/V port 540 for reproduction by an external audio player or a device having audio capabilities.
The front panel I/O subassembly 530 supports the functionality of the power button 550 and the eject button 552, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 500. A system power supply module 536 provides power to the components of the multimedia console 500. A fan 538 cools the circuitry within the multimedia console 500.
The CPU 501, GPU 508, memory controller 510, and various other components within the multimedia console 500 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, and the like.
When the multimedia console 500 is powered on, application data may be loaded from the system memory 543 into memory 512 and/or the caches 502, 504 and executed on the CPU 501. The application may present a graphical user interface that provides a consistent user experience when navigating to the different media types available on the multimedia console 500. In operation, applications and/or other media contained within the media drive 544 may be launched or played from the media drive 544 to provide additional functionality to the multimedia console 500.
The multimedia console 500 may be operated as a standalone system by simply connecting the system to a television or other display.
The capture device 20 may define additional input devices for the console 500 via the USB controller 526 or other interface. In other embodiments, the hub computing system 12 can be implemented using other hardware architectures. No single hardware architecture is required.
The head mounted display device 2 and the processing unit 4 (sometimes referred to together as a mobile display device) are in communication with one hub computing system 12 (also referred to as the hub 12). Each of the mobile display devices may communicate with the hub using wireless communication as described above. In such an embodiment, it is contemplated that much of the information that is useful to the mobile display devices will be computed and stored at the hub and transmitted to each of the mobile display devices. For example, the hub will generate the model of the environment and provide that model to all of the mobile display devices in communication with the hub. Additionally, the hub can track the position and orientation of the mobile display devices and of the moving objects in the room, and then transfer that information to each of the mobile display devices.
In another embodiment, the system may include multiple hubs 12, with each hub including one or more mobile display devices. The hubs may communicate with each other directly or via the Internet (or other networks).
Moreover, in further embodiments, the hub 12 may be omitted altogether. One advantage of such an embodiment is that the mixed reality experience of the present system becomes fully mobile and may be used in both indoor and outdoor settings. In such an embodiment, all functions performed by the hub 12 in the description that follows may alternatively be performed by one of the processing units 4, by some of the processing units 4 working in cooperation, or by all of the processing units 4 working in cooperation. In such an embodiment, the respective mobile display devices 2 perform all functions of the system 10, including generating and updating state data, the scene graph, each user's view of the scene graph, all texture and rendering information, video and audio data, and other information needed to perform the operations described herein.
The hub 12 and the processing unit 4 gather data from the scene. For the hub 12, this may be image and audio data sensed by the depth camera 426 and the RGB camera 428 of the capture device 20. For the processing unit 4, this may be image data sensed in step 656 by the head mounted display device 2, and in particular image data sensed by the cameras 112, the eye tracking assembly 134, and the IMU 132. In step 656, the data gathered by the head mounted display device 2 is sent to the processing unit 4. The processing unit 4 processes this data in step 630 and sends it to the hub 12.
The hub 12 performs various setup steps that allow the hub 12 to coordinate the image data of its capture device 20 and of the one or more processing units 4. In particular, even if the position of the capture device 20 relative to the scene is known (which may not necessarily be the case), the cameras on the head mounted display devices 2 move around within the scene. Therefore, in some embodiments, the positions and capture timing of each of the imaging cameras need to be calibrated to the scene, to each other, and to the hub 12.
The clock offsets of the various imaging devices in the system 10 are first determined. In particular, in order to coordinate the image data from each of the cameras in the system, it can be confirmed that the image data being coordinated is from the same time. In general, the image data from the capture device 20 and the image data coming in from the one or more processing units 4 are time stamped off a single master clock in the hub 12. Using the time stamps for all such data for a given frame, and using the known resolution of each of the cameras, the hub 12 determines the time offset of each of the imaging cameras in the system. From this, the hub 12 may determine the differences between, and the adjustments to, the images received from each camera.
The hub 12 may select a reference time stamp from a frame received from one of the cameras. The hub 12 may then add time to, or subtract time from, the image data received from all the other cameras to synchronize to the reference time stamp. It is appreciated that, for the calibration process, a variety of other operations may be used to determine the time offsets and/or to synchronize the different cameras together. The determination of the time offsets may be performed once, upon initial receipt of image data from all the cameras. Alternatively, it may be performed periodically, such as, for example, every frame or some number of frames.
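As a simple illustration of that synchronization step, the sketch below picks one camera's timestamp as the reference and shifts the other cameras' frame times by the computed offsets. The camera names and timestamps are invented, and a real system would also have to handle clock drift.

```python
# Time-offset sketch: compute per-camera offsets against a reference timestamp
# and shift each camera's sample times so all cameras agree with the reference.
def compute_offsets(frame_times, reference_camera):
    """frame_times: {camera_id: timestamp of the same frame, in master-clock seconds}."""
    ref = frame_times[reference_camera]
    return {cam: ref - t for cam, t in frame_times.items()}

def apply_offsets(samples, offsets):
    """Shift per-camera sample times by their offsets."""
    return {cam: [t + offsets[cam] for t in times] for cam, times in samples.items()}

frame_times = {"capture20": 10.000, "hmd_cam1": 10.013, "hmd_cam2": 9.992}
offsets = compute_offsets(frame_times, "capture20")
print(offsets)                                           # e.g. {'hmd_cam1': -0.013, ...}
print(apply_offsets({"hmd_cam1": [10.013, 10.046]}, offsets))
```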
An operation is then performed to calibrate the positions of all of the cameras relative to one another in the x, y, z Cartesian space of the scene. Once this information is known, the hub 12 and/or the one or more processing units 4 are able to form a scene map or model identifying the geometry of the scene and the geometry and positions of the objects (including the users) within the scene. Depth and/or RGB data may be used when calibrating the image data of all of the cameras to one another.
Next, the hub 12 may translate the distortion-corrected image data points captured by each camera from the camera view into an orthogonal 3D world view. This orthogonal 3D world view is a point cloud map, in an orthogonal x, y, z Cartesian coordinate system, of all of the image data captured by the capture device 20 and the head-mounted display device cameras. The matrix transformation formulas for translating a camera view into an orthogonal 3D world view are known.
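The known matrix transformation can be sketched as the rigid-body mapping below, assuming that each camera's extrinsic parameters (a rotation R and a translation t relative to the world frame) are available from the calibration step; the function names are illustrative and the sketch is not taken from the disclosure.

```python
# Generic sketch: map distortion-corrected points from a camera frame into the
# orthogonal x, y, z world frame using known extrinsics (R, t). Hypothetical names.
import numpy as np

def camera_to_world(points_cam: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """points_cam: (N, 3) points in the camera frame.
    R: (3, 3) rotation of the camera in the world frame.
    t: (3,) position of the camera in the world frame.
    Returns (N, 3) points in the world frame."""
    return points_cam @ R.T + t

def merge_point_cloud(per_camera_points: dict, extrinsics: dict) -> np.ndarray:
    """Fuse the point clouds of all cameras into one world-frame cloud.
    extrinsics maps camera id -> (R, t)."""
    clouds = [camera_to_world(pts, *extrinsics[cam])
              for cam, pts in per_camera_points.items()]
    return np.vstack(clouds)
```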
In the present invention, a preset virtual image is displayed in the AR/MR display device based on the positioning provided by the geolocation system and the infrared laser positioning system.
The virtual image has preset positioning coordinates, including but not limited to geographic coordinates, 3D coordinates, or relative coordinates. The relative coordinates include relative geographic coordinates and/or relative 3D coordinates.
The virtual image lets the user see, in the AR/MR display device, the object to be detected or the object to be constructed, for example a pipeline buried deep underground or a position to be drilled. The setting of the virtual image's coordinates is therefore closely tied to the positioning system. For example, geographic coordinates may be set for the virtual image: when the AR/MR display device is located within a preset range of geographic coordinates, the virtual image appears at the corresponding position in the field of view according to the 3D coordinates and posture of the AR/MR display device. Taking processor efficiency and ease of operation into account, relative coordinates may also be set. For example, when detecting or constructing within a defined area, a marker may serve as a feature point, and the coordinates of the object to be detected or constructed may be preset relative to that feature point, for example 2 meters due east of a certain manhole cover and 1 meter deep, or in the middle of a certain wooden sign, 20 cm from its upper and lower edges, and so on.
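A relative preset of the kind just described, for example 2 meters due east of a manhole cover and 1 meter deep, could be represented and resolved roughly as follows. The record layout and the local east-north-up convention are assumptions made for this sketch rather than a format defined by the invention.

```python
# Illustrative sketch: resolve a relative preset (offset from a feature point)
# into 3D coordinates in a local east-north-up (ENU) frame. Names are hypothetical.
from dataclasses import dataclass

@dataclass
class PresetVirtualImage:
    name: str
    feature_point: str      # marker used as the reference, e.g. "manhole_cover_17"
    offset_east_m: float    # + east / - west of the feature point
    offset_north_m: float   # + north / - south of the feature point
    offset_up_m: float      # + above / - below ground level at the feature point

def resolve(preset: PresetVirtualImage, feature_points_enu: dict) -> tuple:
    """feature_points_enu maps feature-point id -> (east, north, up) in metres."""
    e, n, u = feature_points_enu[preset.feature_point]
    return (e + preset.offset_east_m, n + preset.offset_north_m, u + preset.offset_up_m)

# "2 metres due east of the manhole cover, 1 metre deep":
pipe_joint = PresetVirtualImage("pipe_joint", "manhole_cover_17", 2.0, 0.0, -1.0)
position = resolve(pipe_joint, {"manhole_cover_17": (105.3, 42.8, 0.0)})
```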
Multiple markers may be used as feature points for positioning, which yields higher accuracy. When the real image of a marker appears in the AR/MR display device, the feature point is confirmed by image recognition and the preset virtual image is superimposed on the real image. The position of the virtual image is adjusted in real time with the position and posture of the AR/MR display device, so that the user sees a virtual image corresponding to the real image.
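One common way to keep the overlay registered as the position and posture of the display device change is a pinhole-style projection of the resolved anchor point into display coordinates, sketched below. The intrinsic parameters shown are placeholders, and the disclosure does not prescribe this particular camera model.

```python
# Generic sketch: project a world-frame anchor point into display pixel
# coordinates from the device pose (R, t) and pinhole intrinsics. Placeholder values.
import numpy as np

def project_to_display(p_world, R_device, t_device, fx=1000.0, fy=1000.0,
                       cx=640.0, cy=360.0):
    """R_device (3x3) and t_device (3,) are the device orientation and position
    in the world frame. Returns (u, v) pixel coordinates, or None if the point
    lies behind the viewer."""
    p_cam = R_device.T @ (np.asarray(p_world) - np.asarray(t_device))
    if p_cam[2] <= 0:          # behind the display
        return None
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return (u, v)

# Re-run every frame with the latest pose from the infrared laser positioning
# system so the virtual image stays registered to the real image.
```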
Presetting the virtual image using geographic coordinates is comparatively simple; its accuracy, however, depends on the precision of the geolocation system used and on the surrounding environmental conditions. Presetting the position of the virtual image using geographic coordinates supplemented by relative positions is preferred. Those skilled in the art may choose freely according to the needs and budget of the actual application.
In some embodiments of the present invention, virtual images can be added remotely via the cloud. Scene information of the environment in which the user is located (taken by default to coincide with the position of the AR/MR display device) is acquired by a capture device (for example an infrared laser scanning system), and the coordinates of a preset virtual image in that scene are added remotely on the basis of the newly acquired information. For example, when exploring an unknown cave, the capture device acquires the cave's appearance features and geographic coordinates, the cave is converted remotely into a known scene, and the coordinates of the virtual image are set. A user at that location who is in a position that satisfies the settings can then see, through the AR/MR display device, the virtual image newly added from the remote end.
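A minimal sketch of such a remote addition is given below as a single upload of a preset virtual image together with its coordinates; the endpoint, field names, and values are invented for illustration and are not part of the disclosure.

```python
# Hypothetical sketch: a remote operator registers a new preset virtual image
# for a scene that the capture device has just uploaded. Endpoint and field
# names are invented for illustration only.
import json
import urllib.request

payload = {
    "scene_id": "cave_survey_001",          # scene built from the uploaded scan
    "virtual_image": "support_beam.glb",
    "coordinates": {
        "type": "relative",                  # or "geographic" / "3d"
        "feature_point": "cave_entrance",
        "offset_m": [4.0, 1.5, -0.8],
    },
}
req = urllib.request.Request(
    "https://example.invalid/api/virtual-images",   # placeholder URL
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # left commented out: the endpoint is a placeholder
```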
Example 1
Microsoft HoloLens is used as the AR/MR display device.
Infrared locators are set up at several fixed points near the surveyor and receive the infrared laser signals. The GPS positioning technology provided by Trimble and the 3D modeling technology of Google Project Tango are superimposed with the GIS data of the pipelines in the database to form the database of the AR/MR detection system.
Based on the GPS data and the FOV data of the AR/MR display device, a virtual pipeline image is displayed on the screen. The position of the virtual pipeline image on the screen must coincide with the GIS data corresponding to the FOV.
Through the Microsoft HoloLens, the surveyor can easily identify the positions of hidden pipelines and carry out construction or detection at the corresponding locations.
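One way to register the GIS pipeline data with the FOV of the display device is to convert the pipeline vertices and the device's GPS fix into a shared local east-north-up frame. The equirectangular approximation below is a simplification assumed for this sketch and is not stated in the embodiment; the coordinates shown are placeholders.

```python
# Simplified sketch: convert GIS latitude/longitude/depth of pipeline vertices
# into metres east/north/up of the AR/MR display device's GPS fix, so they can
# be handed to the renderer. Small-area equirectangular approximation.
import math

EARTH_RADIUS_M = 6_371_000.0

def gis_to_local_enu(lat_deg, lon_deg, depth_m, ref_lat_deg, ref_lon_deg):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    ref_lat, ref_lon = math.radians(ref_lat_deg), math.radians(ref_lon_deg)
    east = (lon - ref_lon) * math.cos(ref_lat) * EARTH_RADIUS_M
    north = (lat - ref_lat) * EARTH_RADIUS_M
    up = -depth_m                     # buried pipelines sit below ground level
    return (east, north, up)

# Device GPS fix (reference) and one pipeline vertex from the GIS database:
device_lat, device_lon = 39.9042, 116.4074        # placeholder coordinates
vertex = gis_to_local_enu(39.90425, 116.40743, 1.2, device_lat, device_lon)
```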
Example 2
A smartphone is used as the AR/MR display device.
Geolocators are set up near the area to be surveyed and receive mobile communication signals for positioning. The 3D modeling technology of Google Project Tango and the mobile communication base-station data are superimposed with the GIS data of the pipelines in the database to form the database of the AR/MR detection system.
Based on the GPS data of the AR/MR display device and the data from the smartphone's position sensors, a virtual pipeline image is displayed on the screen. The position of the virtual pipeline image on the screen must coincide with the GIS data corresponding to the position of the smartphone.
Example 3
Microsoft HoloLens is used as the AR/MR display device.
Infrared locators are set up at several fixed points near the construction worker and receive the infrared laser signals. The GPS positioning technology provided by Trimble and the 3D modeling technology of Google Project Tango are used to model the construction site, virtual images of the construction targets are placed at the corresponding positions, and the superimposed result forms the database of the AR/MR construction system.
Based on the GPS data and the FOV data of the AR/MR display device, a virtual image of the construction target is displayed on the screen. The position of this image in the FOV must coincide with the position where the construction actually needs to take place.
Through the Microsoft HoloLens, the worker can see the positions to be worked on, for example drilling positions or positions where items are to be placed. This removes the trouble of manual measurement and suits construction sites where machines cannot entirely replace humans.
Furthermore, embodiments of the present disclosure may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the present disclosure may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read-only memory (ROM); random-access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.); and others. Further, firmware, software, routines, and instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, and so on.
It is to be appreciated that the Detailed Description section, and not the Summary and Abstract sections, is intended to be used to interpret the claims. The Summary and Abstract sections may set forth one or more but not all exemplary embodiments of the present invention as contemplated by the inventor(s), and are thus not intended to limit the present invention or the appended claims in any way.
While the present invention has been described herein with reference to exemplary embodiments for exemplary fields and applications, it should be understood that the invention is not limited thereto. Other embodiments and modifications thereto are possible and are within the scope and spirit of the invention. For example, and without limiting the generality of this paragraph, embodiments are not limited to the software, hardware, firmware, and/or entities illustrated in the figures and/or described herein. Further, embodiments (whether or not explicitly described herein) have significant utility in fields and applications beyond the examples described herein.
Embodiments have been described herein with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for convenience of description. Alternate boundaries can be defined as long as the specified functions and relationships (or equivalents thereof) are appropriately performed. In addition, alternative embodiments may perform functional blocks, steps, operations, methods, and the like using orderings different from those described herein.
References herein to "one embodiment," "an embodiment," "an example embodiment," or similar phrases indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it would be within the knowledge of persons skilled in the relevant art to incorporate such feature, structure, or characteristic into other embodiments, whether or not explicitly mentioned or described herein.
The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims (24)

  1. A detection system based on a positioning system and AR/MR, comprising:
    an AR/MR display device for displaying AR/MR images;
    a geolocation system for determining the geographic location of the AR/MR display device;
    an infrared laser positioning system for determining the 3D coordinates and posture of the AR/MR display device within a determined area;
    a database for storing data of a preset object to be detected in an area to be detected and a preset virtual image; and
    a processing unit for superimposing the preset virtual image on the real image displayed by the AR/MR display device according to the geographic location, 3D coordinates, and posture of the AR/MR display device.
  2. The system according to claim 1, wherein the geolocation system comprises one or more of GPS, GMS, and LTE positioning systems.
  3. The system according to claim 1, wherein the geolocation system comprises ground base stations and/or ground signal enhancement points.
  4. The system according to claim 1, wherein a part of the database and/or the processing unit may be located in the cloud.
  5. The system according to claim 1, wherein the AR/MR display device comprises a head-mounted display device, a smartphone, or a tablet computer.
  6. The system according to claim 1, wherein the system further comprises a detection apparatus.
  7. The system according to claim 6, wherein the detection apparatus comprises a sensor.
  8. The system according to claim 1, wherein the AR/MR display device is capable of receiving interaction information from a user.
  9. The system according to claim 6, wherein the detection apparatus comprises a capture device for acquiring scene information.
  10. The system according to claim 9, wherein the capture device comprises a depth camera.
  11. A detection method based on a positioning system and AR/MR, comprising:
    displaying a preset virtual image in an AR/MR display device based on the positioning provided by a geolocation system and an infrared laser positioning system.
  12. The method according to claim 11, wherein the virtual image has preset positioning coordinates.
  13. The method according to claim 12, wherein the preset positioning coordinates of the virtual image comprise geographic coordinates.
  14. The method according to claim 12, wherein the preset positioning coordinates of the virtual image comprise relative position coordinates.
  15. The method according to claim 12, wherein the relative position coordinates comprise relative geographic coordinates and/or relative 3D coordinates.
  16. The method according to claim 12 or 15, wherein a feature point in the real image displayed in the AR/MR display device is identified by image recognition, and the virtual image is displayed at a preset position relative to the feature point.
  17. The method according to claim 12 or 15, wherein the geographic location of the AR/MR display device is determined by the geolocation system, and when the AR/MR display device is located within a preset range of geographic coordinates, the virtual image is displayed at a preset relative position.
  18. A non-transitory computer-readable medium storing program instructions which, when executed by a processing device, cause the device to:
    display a preset virtual image in an AR/MR display device based on the positioning provided by a geolocation system and an infrared laser positioning system.
  19. The non-transitory computer-readable medium according to claim 18, wherein the virtual image has preset positioning coordinates.
  20. The non-transitory computer-readable medium according to claim 19, wherein the preset positioning coordinates of the virtual image comprise geographic coordinates.
  21. The non-transitory computer-readable medium according to claim 19, wherein the preset positioning coordinates of the virtual image comprise relative position coordinates.
  22. The non-transitory computer-readable medium according to claim 19, wherein the relative position coordinates comprise relative geographic coordinates and/or relative 3D coordinates.
  23. The non-transitory computer-readable medium according to claim 19 or 22, wherein a feature point in the real image displayed in the AR/MR display device is identified by image recognition, and the virtual image is displayed at a preset position relative to the feature point.
  24. The non-transitory computer-readable medium according to claim 19 or 22, wherein the geographic location of the AR/MR display device is determined by the geolocation system, and when the AR/MR display device is located within a preset range of geographic coordinates, the virtual image is displayed at a preset relative position.