
CN109976690B - AR glasses remote interaction method and device and computer readable medium - Google Patents

AR glasses remote interaction method and device and computer readable medium

Info

Publication number
CN109976690B
CN109976690B
Authority
CN
China
Prior art keywords
information
glasses
cloud server
sending
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910239145.XA
Other languages
Chinese (zh)
Other versions
CN109976690A (en)
Inventor
孙鹏达
赵志昊
肖冰
徐驰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Unikom Beijing Technology Co Ltd
Original Assignee
Unikom Beijing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unikom Beijing Technology Co Ltd filed Critical Unikom Beijing Technology Co Ltd
Priority to CN202010318402.1A priority Critical patent/CN111752511B/en
Priority to CN201910239145.XA priority patent/CN109976690B/en
Publication of CN109976690A publication Critical patent/CN109976690A/en
Application granted granted Critical
Publication of CN109976690B publication Critical patent/CN109976690B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14Digital output to display device ; Cooperation and interconnection of the display device with other functional units
    • G06F3/1454Digital output to display device ; Cooperation and interconnection of the display device with other functional units involving copying of the display data of a local workstation or window to a remote workstation or window so that an actual copy of the data is displayed simultaneously on two or more displays, e.g. teledisplay

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Information Transfer Between Computers (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an AR glasses remote interaction method, an AR glasses remote interaction device and a computer readable medium, and relates to the technical field of augmented reality, wherein the method is applied to a cloud server, and the cloud server is in communication connection with AR glasses through a network; the method comprises the following steps: receiving first information, the first information being sent by the AR glasses; generating second information according to the first information; and sending the second information to the AR glasses. The invention can enable the AR glasses to be communicated with the cloud server through the network, thereby increasing the display content of the AR glasses, and enhancing the interaction with the outside and the interestingness of the user.

Description

AR glasses remote interaction method and device and computer readable medium
Technical Field
The invention relates to the technical field of augmented reality, in particular to an AR glasses remote interaction method, an AR glasses remote interaction device and a computer readable medium.
Background
AR glasses have processing and display functions, like a small computer; when size allows, the processor can perform some simple functions such as calculation and transmission, for example integrating, rendering, and transmitting pictures. However, because the processor of the AR glasses has limited capability, the AR glasses use few resources and their display content is not rich. Meanwhile, the AR glasses operate in isolation, with almost no interaction between the AR glasses and other external equipment, which reduces the interestingness for the user.
Disclosure of Invention
In view of the above, the present invention provides an AR glasses remote interaction method, an AR glasses remote interaction device, and a computer readable medium, which enable the AR glasses and a cloud server to communicate via a network, increase display contents of the AR glasses, and enhance interactivity with the outside and interestingness of a user.
In a first aspect, an embodiment of the present invention provides an AR glasses remote interaction method, which is applied to a cloud server, where the cloud server is in communication connection with the AR glasses through a network; the method comprises the following steps:
receiving first information, the first information being sent by the AR glasses;
generating second information according to the first information;
and sending the second information to the AR glasses.
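The three steps above can be sketched as a single request/response cycle on the cloud server. This is an illustrative Python sketch, not the patent's implementation: the function names, the `kind` field, and the mapping table are all hypothetical, and a real system would exchange these messages over a network rather than through direct function calls.

```python
def generate_second_information(first_information: dict) -> dict:
    """Map each kind of first information to a kind of second information,
    following the pairings given in the patent's implementation manners."""
    mapping = {
        "life_flow": "life_service",
        "service_request": "game_or_video",
        "update": "game",
        "interaction": "third_perspective_enhancement",
    }
    kind = mapping.get(first_information["kind"], "unknown")
    return {"kind": kind, "payload": first_information.get("payload")}

def handle_request(first_information: dict) -> dict:
    """One pass of the receive -> generate -> send cycle."""
    second_information = generate_second_information(first_information)
    return second_information  # in practice, sent back to the AR glasses

second = handle_request({"kind": "life_flow", "payload": "current position"})
```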
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the first information includes at least one of a service request, life flow information, real-world environment information, a virtual portrait, user information, 3D model information, update information, game information, video information, or interaction information;
the second information includes at least one of life service information, game information, video information, third perspective enhancement information, or virtual portrait social information;
the interaction information includes at least one of visual interaction data, handle control interaction data, position interaction data, gesture interaction data, or voice interaction data;
the life flow information includes at least one of location information or image information.
With reference to the first aspect, an embodiment of the present invention provides a second possible implementation manner of the first aspect, where the step of receiving the first information includes receiving the life flow information sent by the AR glasses;
the step of generating second information according to the first information includes generating the life service information according to the life flow information;
the step of sending the second information to the AR glasses includes sending the life service information to the AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the step of receiving the first information includes receiving a service request sent by the AR glasses;
generating second information according to the first information includes generating the game information or the video information according to the service request;
the step of sending the second information to the AR glasses includes sending the game information or the video information to the AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the step of receiving the first information includes receiving update information sent by the AR glasses; the update information is sent when the game data stored in the AR glasses needs to be updated;
the step of generating second information according to the first information includes generating the game information according to the update information;
the step of sending the second information to the AR glasses includes sending the game information to the AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where after the step of sending the game information to the AR glasses, the method further includes:
synchronizing or storing the fused data; the fused data is obtained by fusing the game information with the interaction information of the user collected by the gesture recognition module of the AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, wherein, when the cloud server is connected to at least two AR glasses,
the step of receiving the first information includes receiving interaction information sent by one of the AR glasses and receiving real-world environment information sent by another of the AR glasses, where the real-world environment information around the first AR glasses is collected by the other AR glasses;
the step of generating second information according to the first information includes generating the third perspective enhancement information according to the interaction information and the real-world environment information;
the step of sending the second information to the AR glasses includes sending the third perspective enhancement information to one of the AR glasses.
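As a hedged illustration of this sixth implementation manner: one pair of glasses supplies interaction information, another supplies real-world environment information about the first wearer, and the cloud fuses them into third perspective enhancement information. The Python sketch below uses invented dictionary fields (`frames`, `gestures`, `glasses_id`); the patent does not specify data formats.

```python
def generate_third_perspective(interaction_info: dict, environment_info: dict) -> dict:
    """Fuse one wearer's interactions into the scene captured by the
    other pair of glasses (a trivial merge stands in for real fusion)."""
    return {
        "scene": environment_info["frames"],        # view of wearer A, from glasses B
        "overlays": interaction_info["gestures"],   # wearer A's interactions
        "target": interaction_info["glasses_id"],   # glasses that receive the result
    }

enhanced = generate_third_perspective(
    {"glasses_id": "glasses_A", "gestures": ["point_left"]},
    {"frames": ["frame_from_glasses_B"]},
)
```

In a real deployment the fusion would render the interaction overlays into the video frames; the merge here only shows which inputs flow into the third perspective enhancement information.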
With reference to the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where when the cloud server is connected to at least two AR glasses, the method further includes:
the step of receiving the first information comprises receiving the 3D model information and/or game information and/or video information sent by one of the AR glasses;
the step of sending includes sending the received 3D model information and/or game information and/or video information to another one of the AR glasses.
With reference to the first aspect, an embodiment of the present invention provides an eighth possible implementation manner of the first aspect, wherein, when the cloud server is connected to at least two AR glasses,
the step of receiving the first information includes receiving the service information transmitted by each AR glasses;
generating second information according to the first information comprises generating the 3D model information and/or game information and/or video information according to the service information;
the step of sending the second information to the AR glasses comprises sending the 3D model information, and/or game information and/or video information to the at least two AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a ninth possible implementation manner of the first aspect, where the method further includes:
receiving modification information sent by one of the AR glasses, wherein the modification information comprises modification of 3D model information, and/or modification of game data and/or modification of video data;
sending the modification information to each of the AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a tenth possible implementation manner of the first aspect, wherein, when the cloud server is connected to at least two AR glasses,
the step of receiving the first information includes receiving the virtual portrait and the user information sent by each of the at least two AR glasses, where the user information is acquired by each pair of AR glasses through an acquisition device;
the step of generating second information according to the first information includes integrating the virtual portrait and user information sent by each of the at least two AR glasses to obtain a plurality of virtual portraits carrying user information, and integrating the plurality of virtual portraits carrying user information to generate the virtual portrait social information;
the step of sending the second information to the AR glasses includes sending the virtual portrait social information to each of the AR glasses.
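A minimal sketch of this integration step, under assumed data structures: each pair of glasses submits a virtual portrait plus user information, and the cloud tags and bundles them into one piece of virtual portrait social information. The function and field names are illustrative only, not from the patent.

```python
def integrate_portraits(submissions: list) -> dict:
    """Tag each virtual portrait with its user information, then bundle
    them into one shared piece of virtual portrait social information."""
    tagged = [{"portrait": s["portrait"], "user": s["user"]} for s in submissions]
    return {"kind": "virtual_portrait_social", "portraits": tagged}

# One submission per pair of connected AR glasses.
social = integrate_portraits([
    {"portrait": "avatar_1", "user": {"name": "wearer_1"}},
    {"portrait": "avatar_2", "user": {"name": "wearer_2"}},
])
```

The resulting bundle would then be sent back to every connected pair of glasses, so each wearer sees all portraits with their associated user information.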
With reference to the first aspect, an embodiment of the present invention provides an eleventh possible implementation manner of the first aspect, where when one end of the cloud server is connected to an external device, and the other end of the cloud server is communicatively connected to the AR glasses through a network, the method further includes:
acquiring real-time data from external equipment;
generating third information according to the real-time data;
and sending the third information to the AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a twelfth possible implementation manner of the first aspect, where the step of obtaining real-time data from an external device includes obtaining environmental data from the external device;
generating third information according to the real-time data comprises generating recommendation information according to the environment data;
the step of sending the third information to the AR glasses includes sending the recommendation information to the AR glasses.
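For instance, the environment-data-to-recommendation step might look like the following sketch. The sensor fields and thresholds (`pm25`, `temperature_c`, the cutoff values) are invented for illustration; the patent names no specific environmental metrics.

```python
def generate_recommendation(environment_data: dict) -> dict:
    """Turn raw environmental readings from an external device into
    recommendation information for the glasses wearer."""
    tips = []
    # Hypothetical rules; real thresholds would come from product design.
    if environment_data.get("pm25", 0) > 75:
        tips.append("air quality poor: consider wearing a mask")
    if environment_data.get("temperature_c", 20) < 5:
        tips.append("cold outside: dress warmly")
    return {"kind": "recommendation", "tips": tips}

reco = generate_recommendation({"pm25": 100, "temperature_c": 2})
```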
With reference to the first aspect, an embodiment of the present invention provides a thirteenth possible implementation manner of the first aspect, where the step of acquiring real-time data from an external device includes acquiring image information and sound information from the external device;
the step of generating third information according to the real-time data includes generating a first live video according to the image information and the sound information;
the step of sending the third information to the AR glasses includes sending the first live video to the AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a fourteenth possible implementation manner of the first aspect, where the method further includes:
the step of acquiring real-time data from the external equipment comprises the steps of acquiring physiological index information, running state information or image information from the external equipment;
the step of sending real-time data includes sending the physiological index information, or the operation state information, or the image information to the AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a fifteenth possible implementation manner of the first aspect, where when one end of the cloud server is connected to at least two pieces of AR glasses, and the other end of the cloud server is connected to an external device, the method further includes:
the step of receiving glasses data includes receiving external real image information collected by one pair of AR glasses;
the step of generating fourth information according to the real-time data and the glasses data includes integrating the external real image information with the physiological index information, or with the running state information, to generate a second live video; the content of the second live video includes at least the external real image information together with the physiological index information or the running state information;
the step of sending the fourth information to the AR glasses includes sending the second live video to the other pair of AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a sixteenth possible implementation manner of the first aspect, where the step of generating fourth information according to the real-time data and the glasses data includes: integrating the external real image information to generate a virtual model, and integrating the virtual model with the physiological index information, or with the running state information, to generate a third live video; the content of the third live video includes at least the virtual model together with the physiological index information or the running state information;
the step of sending the fourth information to the AR glasses includes sending the third live video to the other pair of AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a seventeenth possible implementation manner of the first aspect, where after the step of sending the fourth information to the AR glasses, the method further includes:
receiving the interactive information and the virtual model of one AR glasses, fusing the interactive information and the virtual model to generate fifth information, and sending the fifth information to the other AR glasses.
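A sketch of this fusion step under assumed structures: the cloud annotates the shared virtual model with one wearer's interaction and packages the result as fifth information addressed to the other pair of glasses. All field names are hypothetical.

```python
def fuse_interaction(virtual_model: dict, interaction: dict) -> dict:
    """Fuse one wearer's interaction into the virtual model and address
    the result (fifth information) to the other pair of glasses."""
    model = dict(virtual_model)  # leave the caller's model untouched
    model["annotations"] = list(virtual_model.get("annotations", [])) + [interaction["gesture"]]
    return {"kind": "fifth_information", "model": model,
            "forward_to": interaction["peer_glasses"]}

fifth = fuse_interaction({"mesh": "runner"},
                         {"gesture": "cheer", "peer_glasses": "glasses_B"})
```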
With reference to the first aspect, an embodiment of the present invention provides an eighteenth possible implementation manner of the first aspect, wherein, when the real-time data includes a plurality of images,
the step of generating third information according to the real-time data includes compositing and stitching the plurality of images to generate a real-time 3D portrait;
the step of sending the third information to the AR glasses includes sending the real-time 3D portrait to the at least two AR glasses.
With reference to the first aspect, an embodiment of the present invention provides a nineteenth possible implementation manner of the first aspect, where the step of compositing and stitching the plurality of images to generate a real-time 3D portrait further includes:
analyzing how the images change across the plurality of images, determining a preset special effect accordingly, and combining the preset special effect with the real-time 3D portrait to generate image information.
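These two implementation manners can be sketched together: compose a portrait from several images, pick a preset special effect from how the images change, and combine the two into image information. The change analysis below (counting differing adjacent frames) is a placeholder assumption, since the patent specifies no stitching or analysis algorithm.

```python
def compose_portrait(images: list) -> dict:
    """Stand-in for 3D stitching: record how many views were combined."""
    return {"kind": "3d_portrait", "views": len(images)}

def pick_effect(images: list) -> str:
    """Hypothetical rule: if most adjacent frames differ, choose a
    'dynamic' preset effect, otherwise a 'static' one."""
    changes = sum(1 for a, b in zip(images, images[1:]) if a != b)
    return "dynamic" if changes > len(images) // 2 else "static"

def generate_image_information(images: list) -> dict:
    portrait = compose_portrait(images)
    portrait["effect"] = pick_effect(images)
    return portrait
```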
In a second aspect, the embodiment of the invention further provides an AR glasses remote interaction method, which is applied to AR glasses, wherein the AR glasses are in communication connection with a cloud server through a network; the method comprises the following steps:
sending first information to the cloud server;
receiving second information sent by the cloud server.
With reference to the second aspect, an embodiment of the present invention provides a first possible implementation manner of the second aspect, where the first information includes a service request, life flow information, real-world environment information, a virtual portrait, user information, 3D model information, update information, game information, video information, and interaction information;
the second information comprises life service information, game information, video information, third perspective enhancement information and virtual portrait social information;
the interaction information comprises: visual interaction data, handle control interaction data, position interaction data, gesture interaction data and voice interaction data;
the life flow information includes position information or image information.
With reference to the second aspect, an embodiment of the present invention provides a second possible implementation manner of the second aspect, where the method includes:
storing the game data locally before sending the first information;
the step of sending the first information includes sending game update information to the cloud server;
the step of receiving the second information includes receiving the update data corresponding to the game update information, so as to update the locally stored game data with the update data.
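The glasses-side update flow described above might be sketched as follows, assuming an integer version number as the update criterion (the patent does not specify one); function and field names are illustrative:

```python
# Game data stored locally on the glasses before any first information is sent.
local_game_data = {"version": 1, "levels": ["forest"]}

def build_update_request(store: dict) -> dict:
    """First information: tell the cloud which game version is held locally."""
    return {"kind": "game_update", "current_version": store["version"]}

def apply_update(store: dict, update_data: dict) -> dict:
    """Second information: merge the cloud's update data into local storage."""
    if update_data["version"] > store["version"]:
        store = dict(store, version=update_data["version"],
                     levels=store["levels"] + update_data["new_levels"])
    return store

request = build_update_request(local_game_data)
local_game_data = apply_update(local_game_data,
                               {"version": 2, "new_levels": ["desert"]})
```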
With reference to the second aspect, an embodiment of the present invention provides a third possible implementation manner of the second aspect, where before sending the first information, interaction information of a user is collected through a gesture recognition module, and the interaction information and the game data are processed in a fusion manner;
the step of sending the first information includes sending the data after the fusion processing to the cloud server so that the cloud server can calculate, synchronize or store the data after the fusion processing.
With reference to the second aspect, an embodiment of the present invention provides a fourth possible implementation manner of the second aspect, where after the step of receiving the second information, the method further includes: and playing the received second information.
With reference to the second aspect, an embodiment of the present invention provides a fifth possible implementation manner of the second aspect, where after the step of receiving the second information, the method further includes: and fixing the 3D model information and/or game data and/or video data and/or virtual images and/or running state information from different external devices in a physical space of the AR glasses through a space positioning algorithm.
With reference to the second aspect, an embodiment of the present invention provides a sixth possible implementation manner of the second aspect, where the method further includes: and generating a virtual image, and sending the virtual image to the cloud server for sending the virtual image to another AR glasses.
In a third aspect, the embodiment of the present invention further provides an AR glasses remote interaction apparatus, which is applied to a cloud server, where the cloud server is in communication connection with the AR glasses through a network; the device comprises:
a receiving module, configured to receive first information, where the first information is sent by the AR glasses;
the generating module is used for generating second information according to the first information;
and the sending module is used for sending the second information to the AR glasses.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable medium having non-volatile program code executable by a processor, where the program code causes the processor to execute the method described in any one of the embodiments of the first aspect.
In a fifth aspect, the embodiment of the invention further provides an AR glasses remote interaction device, which is applied to AR glasses, wherein the AR glasses are in communication connection with a cloud server through a network; the device comprises:
the sending module is configured to send first information to the cloud server;
and the receiving module is configured to receive second information sent by the cloud server.
In a sixth aspect, the present invention further provides a computer-readable medium having non-volatile program code executable by a processor, where the program code causes the processor to execute the method according to any one of the second aspects.
The embodiment of the invention has the following beneficial effects: the cloud server can receive first information sent by the AR glasses, is in communication connection with the AR glasses through a network, generates second information according to the first information, and sends the second information to the AR glasses. The invention can enable the AR glasses to be communicated with the cloud server through the network, thereby increasing the display content of the AR glasses, and enhancing the interaction with the outside and the interestingness of the user.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a block diagram of an eyeglass remote interaction system provided in accordance with one embodiment of the present invention;
fig. 2 is a flowchart of an AR glasses remote interaction method applied in the glasses remote interaction system provided in an embodiment;
FIG. 3 is a block diagram of an eyeglass remote interactive system according to another embodiment of the present invention;
FIG. 4 is a block diagram of an eyeglass remote interactive system according to still another embodiment of the present invention;
fig. 5 is a flowchart of an AR glasses remote interaction method applied in a glasses remote interaction system provided in still another embodiment;
fig. 6 is a block diagram of an eyeglass remote interactive system according to still another embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
At present, because the function of the processor carried by the AR glasses is limited, the resource used by the AR glasses is less, and the display content is not rich. Meanwhile, the AR glasses are in a single state, and almost no interaction exists between the AR glasses and other external equipment, so that the interestingness of a user is reduced. Based on this, the AR glasses remote interaction method, the AR glasses remote interaction device and the computer readable medium provided by the embodiment of the invention can enable the AR glasses to interact with the cloud server through the network, so that the display content of the AR glasses is increased, and the interaction with the outside and the interestingness of the user are enhanced.
For the purpose of facilitating an understanding of the present invention, the present invention will be described in detail by the following four examples.
The embodiment provides a first AR glasses remote interaction system, and the AR glasses remote interaction system comprises AR glasses and a cloud server. As an example, referring to fig. 1, the AR glasses remote interaction system includes one AR glasses 10, one AR glasses 10 performs network communication with the cloud server 20 through a communicator, and one AR glasses 10 performs remote interaction with the cloud server 20.
The cloud server 20 stores massive data information in various formats and has computing power many times stronger than that of a terminal. The method enables the AR glasses to acquire this data information as content through wireless transmission, or to transmit the data collected by the AR glasses wirelessly to the cloud for fusion operations that use the computing power of the cloud. The cloud server comprises one or more servers. That is, the cloud server 20 may in a broad sense be any device with sufficient computing capability and content storage, even one or several mobile phones or PCs. Because an augmented reality system must store and exchange large volumes of data and places high demands on the computing capability and real-time performance of the platform, the cloud server 20 is generally a centralized computing and storage platform, a server or server cluster, or terminal cloud computing devices distributed around the world. The AR glasses are typically augmented reality glasses with mainstream hardware configuration and data acquisition capability.
The communicator may connect the AR glasses and the cloud server through either a wireless or a wired network, and different wireless transmission schemes can be used according to different data bandwidth demands, which increases the variety of content the AR glasses can display and improves the interestingness. For example, simple audio and video browsing applications can be implemented over current 4G or Wi-Fi, while demanding augmented reality scene applications may require high-bandwidth communication standards such as WiGig or the new generation of 5G.
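The bandwidth-matching idea in this paragraph can be sketched as a simple selector; the 50 Mbps threshold is an illustrative assumption, not a figure from the patent:

```python
def choose_transport(required_mbps: float) -> str:
    """Pick a transport class for an application by its rough bandwidth
    demand: light audio/video browsing fits 4G/Wi-Fi, while heavy AR
    scenes need a high-bandwidth link such as WiGig or 5G."""
    if required_mbps <= 50:  # hypothetical cutoff for illustration
        return "4G/Wi-Fi"
    return "WiGig/5G"
```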
Based on the AR glasses remote interaction system provided in fig. 1, shown in fig. 2, a method for AR glasses remote interaction applied in the system shown in fig. 1 is shown, and the method includes:
s110: the AR glasses send first information, and the first information is sent to the cloud server.
The first information includes at least one of a service request, life flow information, real-world environment information, a virtual portrait, user information, 3D model information, update information, game information, video information, or interaction information. The life flow information includes at least one of location information or image information. The interaction information includes at least one of visual interaction data, handle control interaction data, position interaction data, gesture interaction data, or voice interaction data.
The service request may be generated according to demand information acquired by an acquisition device. The acquisition device of the AR glasses may include a voice acquisition device, a video acquisition device, a handle control device, or the like; the demand information of the user is input into the AR glasses through the user's gestures, voice, position, and so on, and the AR glasses generate the service request from this demand information.
Besides being generated from the user's demand information, the service request sent by the AR glasses may ask the cloud server to perform certain computations. For example, for an operation that the processor of the AR glasses cannot handle, the glasses generate first information and send it to the cloud server to request the computation; the cloud server then sends the result back to the AR glasses, reducing the computational burden on the glasses.
S120: the cloud server receives first information, and the first information is sent by the AR glasses.
S130: and the cloud server generates second information according to the first information. Wherein the second information comprises at least one of life service information, game information, video information, third perspective enhancement information or virtual portrait social information.
S140: and the cloud server sends the second information to the AR glasses.
S150: and the AR glasses receive second information sent by the cloud server.
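The S110–S150 exchange above can be sketched as a single request–response round trip between the glasses and the cloud server. This is an illustrative sketch only: the patent does not define a wire format, so the message kinds and field names below are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class FirstInfo:
    kind: str                       # e.g. "service_request", "life_stream"
    payload: dict = field(default_factory=dict)

@dataclass
class SecondInfo:
    kind: str
    payload: dict = field(default_factory=dict)

class CloudServer:
    """Receives first information (S120) and generates second information (S130)."""
    def handle(self, first: FirstInfo) -> SecondInfo:
        if first.kind == "life_stream":
            # S130: life stream information -> life service information
            return SecondInfo("life_service", {"recommendation": "nearby storefront"})
        if first.kind == "service_request":
            # e.g. a request for game or video content
            return SecondInfo(first.payload.get("want", "video"), {})
        raise ValueError(f"unknown first-information kind: {first.kind}")

class ARGlasses:
    def __init__(self, server: CloudServer):
        self.server = server
    def interact(self, first: FirstInfo) -> SecondInfo:
        # S110: send first information; S140/S150: receive second information
        return self.server.handle(first)

glasses = ARGlasses(CloudServer())
second = glasses.interact(FirstInfo("life_stream", {"location": (39.9, 116.4)}))
```

In a real system the `handle` call would be a network round trip; here it is a direct call to keep the flow visible.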
The AR glasses remote interaction method provided by this embodiment supports a number of application scenarios.
For example, application to life recommendation:
the step of the cloud server receiving the first information includes receiving the life stream information sent by the AR glasses;
the step of the cloud server generating second information according to the first information includes generating life service information according to the life stream information;
the step of the cloud server sending the second information to the AR glasses includes sending the life service information to the AR glasses.
Specifically: the user wears the AR glasses, which bundle the user's current location information and "life stream" information (where the user is, what the user sees, and so on, such as video information collected by the image data acquisition module of the AR glasses) into an interaction behavior request sent to the cloud server. The cloud server analyzes the "life stream" information uploaded by the AR glasses, generates interactive content related to relevant life services, and sends it to the AR glasses over the network. For example, while the user is shopping, the glasses recommend the location of a storefront of a frequently visited brand through interaction with the cloud.
For example, application to gaming:
the step of the cloud server receiving the first information includes receiving a service request sent by the AR glasses;
the step of the cloud server generating second information according to the first information includes generating the game information according to the service request;
the step of the cloud server sending the second information to the AR glasses includes sending the game information to the AR glasses.
Optionally, before the AR glasses send the first information, the game data is stored locally;
the step of the AR glasses sending the first information includes sending game update information to the cloud server;
the step of the cloud server receiving the first information includes receiving the update information sent by the AR glasses, where the update information is used when the game data stored in the AR glasses needs to be updated;
the step of the cloud server generating second information according to the first information includes generating the game information according to the update information;
the step of the cloud server sending the second information to the AR glasses includes sending the game information to the AR glasses;
the step of the AR glasses receiving the second information includes receiving the update data corresponding to the game update information, so as to update the locally stored game data with that update data.
That is, the AR glasses may store game data locally, establish a connection with the cloud server, download the update data to the AR glasses, and upgrade the game data with the update data.
Further, the AR glasses fix the game data in the physical space around the AR glasses through a spatial localization algorithm.
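The update handshake described above can be sketched as follows: the glasses report their locally stored version (the update information), and the cloud returns only the missing data. Version numbers, patch contents, and the store layout are invented for illustration.

```python
# Local store on the AR glasses and the authoritative store on the cloud.
LOCAL_STORE = {"version": 2, "levels": ["1", "2"]}
CLOUD_STORE = {"version": 3, "patches": {3: {"levels": ["3"]}}}

def make_update_request(store: dict) -> dict:
    """First information: tell the cloud which game version is stored locally."""
    return {"kind": "game_update", "have_version": store["version"]}

def cloud_generate_update(request: dict) -> dict:
    """Second information: patch data from have_version+1 up to the latest version."""
    patches = [CLOUD_STORE["patches"][v]
               for v in range(request["have_version"] + 1,
                              CLOUD_STORE["version"] + 1)]
    return {"kind": "game_data",
            "version": CLOUD_STORE["version"],
            "patches": patches}

def apply_update(store: dict, update: dict) -> None:
    """Upgrade the locally stored game data with the downloaded update data."""
    for patch in update["patches"]:
        store["levels"].extend(patch["levels"])
    store["version"] = update["version"]

apply_update(LOCAL_STORE, cloud_generate_update(make_update_request(LOCAL_STORE)))
```

Sending only the delta rather than the whole game keeps the download small, which matters for a head-worn device on a wireless link.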
In some embodiments, after the step of sending the game information to the AR glasses, the method further includes:
before the AR glasses send the first information, collecting the user's interaction information through a gesture recognition module, and fusing the interaction information with the game data;
the step of the AR glasses sending the first information includes sending the fused data to the cloud server, so that the cloud server can compute, synchronize, or store the fused data;
the cloud server synchronizes or stores the fused data, where the fused data is obtained by fusing the game information with the user's interaction information collected by the gesture recognition module of the AR glasses.
In other words: the user wears the AR glasses and plays a stand-alone game (similar to Angry Birds). Through network communication with the cloud server, the AR glasses send an interaction behavior request for downloading game content, updates, main game screens, and so on; after receiving it, the cloud server generates interactive content, such as data packets of the game content, updates, and main game screens, and sends it to the AR glasses, where it is stored in the storage module. Further, the AR glasses can interact with the cloud server while the game is being played and feed information back. The gesture recognition module of the AR glasses recognizes the user's interaction information, such as gestures, computes the gesture data, packs the interaction information and the game information into a data packet or another file format, and sends the packet to the cloud server. The cloud server performs fusion computation on this data and interprets the meaning of the gesture data; for example, a raised hand represents a jump in the game. The cloud server then sends the jump-related interactive content to the AR glasses for synchronized operation, and at the same time stores the interaction behavior in the cloud server.
Application to augmented reality film watching:
the step of the cloud server receiving the first information includes receiving a service request sent by the AR glasses;
the step of the cloud server generating second information according to the first information includes generating the video information according to the service request;
the step of the cloud server sending the second information to the AR glasses includes sending the video information to the AR glasses.
The AR glasses play the received second information.
Further, the AR glasses fix the video data in the physical space around the AR glasses through a spatial localization algorithm.
An example based on the above: the user wears the AR glasses, which connect to the cloud server through the wireless connection module. The AR glasses send an interaction behavior request to call up video resources on the cloud server; the cloud server sends the video resources to the AR glasses, which either store them in the storage module for viewing at any time or play the resources stored on the cloud server in real time. The video screen can follow the user's field of view, or be fixed in space through a SLAM algorithm, constructing an augmented reality cinema in front of the user's eyes.
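The SLAM-based "fixing in space" used here and in the later scenarios can be illustrated with a toy pose transform: once SLAM tracks the headset pose, a world-anchored screen keeps a constant world position, so its position in view coordinates changes as the head moves. Two-dimensional vectors stand in for the full 6-DoF case; the numbers are illustrative only.

```python
def world_to_view(world_point, head_position):
    """View-space position of a world-anchored point for the current head pose."""
    return (world_point[0] - head_position[0],
            world_point[1] - head_position[1])

SCREEN_WORLD = (5.0, 2.0)                           # screen fixed in the room by SLAM
view_a = world_to_view(SCREEN_WORLD, (0.0, 0.0))    # initial pose
view_b = world_to_view(SCREEN_WORLD, (1.0, 0.0))    # user stepped forward 1 m
```

A screen that instead "follows the field of view" would simply keep a constant view-space position and skip this transform.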
This embodiment further provides a second AR glasses remote interaction system, shown in fig. 3, which includes a plurality of AR glasses 10 (AR glasses 1 to n) and a cloud server 20, where AR glasses 1 to n are each in network communication with the cloud server 20 so as to perform remote sharing interaction through the cloud server 20. The AR glasses, the cloud server, and the communication method in this system are the same as in the first embodiment; to avoid repetition, refer to the description of the first embodiment.
Based on the system of fig. 3, a number of application scenarios can be implemented:
For example, application to third-perspective augmented reality recording:
the step of the cloud server receiving the first information includes receiving interaction information sent by one pair of AR glasses, and receiving real-world environment information of that pair collected by another pair of AR glasses;
the step of the cloud server generating second information according to the first information includes generating third-perspective enhancement information according to the interaction information and the real-world environment information;
the step of the cloud server sending the second information to the AR glasses includes sending the third-perspective enhancement information to the one pair of AR glasses.
Specifically, user 1 wears AR glasses and interacts with virtual images in the field of view (for example, by gaze, handle control, position, gesture, or voice interaction). User 2 wears another pair of AR glasses and can acquire user 1's interaction information (including changes in user 1's position, interaction between user 1 and virtual objects, and so on) through the cloud server. An image acquisition module in user 2's AR glasses captures real-world images that include user 1. The cloud server fuses the real images captured by user 2's glasses with the virtual objects obtained from the cloud server, and transmits the fused visual result to user 2's AR glasses and/or user 1's AR glasses. From user 2's glasses, this achieves an augmented reality "camera" recording of user 1.
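The fusion step on the cloud server can be sketched as a per-pixel composite of user 2's real-world frame with the virtual layer from user 1's session. Small integer grids stand in for images, and a nonzero virtual pixel overwriting the real pixel stands in for proper alpha blending; the frames and values are illustrative.

```python
def fuse_third_view(real_frame, virtual_layer):
    """Composite a virtual layer over a real camera frame, pixel by pixel."""
    assert len(real_frame) == len(virtual_layer)
    return [
        [v if v != 0 else r for r, v in zip(rrow, vrow)]
        for rrow, vrow in zip(real_frame, virtual_layer)
    ]

real = [[1, 1, 1], [1, 1, 1]]        # camera image containing user 1
virtual = [[0, 9, 0], [0, 0, 9]]     # virtual objects fetched from the cloud
fused = fuse_third_view(real, virtual)
```

In the full system the virtual layer would first be rendered from user 2's viewpoint using the pose information, so the virtual objects line up with user 1 in the frame.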
In multi-person collaboration for 3D design, this can be achieved in two ways:
First, one AR glasses user shares 3D model information with another AR glasses user. The specific steps are: the step of the cloud server receiving the first information includes receiving the 3D model information sent by one pair of AR glasses;
the step of the cloud server sending the second information includes sending the 3D model information to the other pair of AR glasses.
Second, at least two AR glasses users each request the 3D model information from the cloud server. The specific steps are: the step of the cloud server receiving the first information includes receiving the service information sent by each pair of AR glasses;
the step of the cloud server generating second information according to the first information includes generating the 3D model information according to the service information;
the step of the cloud server sending the second information to the AR glasses includes sending the 3D model information to the at least two pairs of AR glasses.
Further, in both manners, the cloud server may also receive modification information sent by one pair of AR glasses, where the modification information includes modifications to the 3D model;
the cloud server sends the modification information to each pair of AR glasses.
Each pair of AR glasses fixes the 3D model information in its physical space through a spatial localization algorithm.
Specifically, users 1 and 2 (who may be in the same physical space or in different physical spaces) both wear mixed reality glasses. A 3D model presented in user 1's field of view is sent to the cloud server, which forwards it to user 2, so that the same 3D model appears in both fields of view; alternatively, users 1 and 2 request the cloud server to generate the 3D model, which is then sent to both of them. The models are fixed at set positions in each user's physical space through a SLAM algorithm. When user 1 edits and modifies the 3D model in physical space (usually through gesture recognition), the operation information is captured by the depth recognition module and the image acquisition module on user 1's glasses, uploaded to the cloud server, transmitted to user 2's glasses, and synchronously imaged in user 2's field of view in real time, achieving collaborative editing of the same design by multiple users.
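The relay pattern described above, where the cloud applies one user's edit and forwards it to every other pair of glasses so the shared model stays synchronized, can be sketched as follows. Class and field names are illustrative, not from the patent.

```python
class ModelSyncServer:
    """Cloud-side relay keeping one authoritative 3D model in sync across glasses."""

    def __init__(self):
        self.model = {}        # shared model state: part name -> transform
        self.inboxes = {}      # glasses id -> edits forwarded to that pair

    def register(self, glasses_id: str):
        self.inboxes[glasses_id] = []

    def submit_edit(self, sender: str, edit: dict):
        # Apply the edit to the authoritative model copy...
        self.model[edit["part"]] = edit["transform"]
        # ...and forward it to every other registered pair of glasses.
        for gid, inbox in self.inboxes.items():
            if gid != sender:
                inbox.append(edit)

server = ModelSyncServer()
server.register("user1")
server.register("user2")
server.submit_edit("user1", {"part": "wheel", "transform": "rotate_90"})
```

Keeping the authoritative copy on the server means a late-joining pair of glasses can fetch the full current model rather than replaying every edit.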
Application to augmented reality competitive games: similar to multi-person 3D design collaboration, this can be achieved in two ways:
First, one AR glasses user shares game information with another AR glasses user. The specific steps are: the step of the cloud server receiving the first information includes receiving the game information sent by one pair of AR glasses;
the step of the cloud server sending the second information includes sending the game information to the other pair of AR glasses.
Second, at least two AR glasses users each request the game information from the cloud server. The specific steps are: the step of the cloud server receiving the first information includes receiving the service information sent by each pair of AR glasses;
the step of the cloud server generating second information according to the first information includes generating the game information according to the service information;
the step of the cloud server sending the second information to the AR glasses includes sending the game information to the at least two pairs of AR glasses.
Further, in both manners, the cloud server may also receive modification information sent by one pair of AR glasses, where the modification information includes game data modifications;
the cloud server sends the modification information to each pair of AR glasses.
The method further includes spatial localization: fixing the game data in the physical space of each pair of AR glasses through a spatial localization algorithm.
Specifically, users 1 and 2 (who may be in the same physical space or in different physical spaces) both wear AR glasses. One of them requests the same sand-table chessboard, or one sends their sand-table chessboard to the other through the cloud server, so that the same chessboard appears in both fields of view (fetched from the cloud server as needed) and is fixed at a set position in each physical space through a SLAM algorithm. The AR glasses worn by user 1 recognize user 1's operations, such as gestures, gaze tracking, and touch, to control the pieces moving on the chessboard; the move operation is uploaded to the cloud server, which transmits it to the AR glasses worn by user 2. The same applies when many pairs of AR glasses are worn simultaneously, realizing chess-and-card competitive games. This scheme is also suitable for any multi-player real-time game, such as "Rong Yang".
Application to virtual portrait socializing:
the step of the cloud server receiving the first information includes receiving the virtual portrait and user information sent by each of at least two pairs of AR glasses, where each pair of AR glasses collects its user information through an acquisition device;
the step of the cloud server generating second information according to the first information includes combining the virtual portrait and user information sent by each of the at least two pairs of AR glasses to obtain a plurality of virtual portraits carrying user information, and integrating these to generate the virtual portrait social information;
the step of the cloud server sending the second information to the AR glasses includes sending the virtual portrait social information to each pair of AR glasses.
The AR glasses fix the virtual image in their physical space through a spatial localization algorithm.
Further, the method includes modification and synchronization of information: the user information in one pair of AR glasses is modified, and the modification result is synchronized to the other AR glasses.
Specifically, users 1 and 2, up to user n, in different physical spaces each pre-construct a virtual portrait through the system on their own glasses. The virtual portrait may contain the user's current relative position, expression, and action information, together with functions combined with each user's current scene (for example, sending voice, bullet comments, or expressions). These two kinds of information are uploaded to the cloud server through the users' glasses; the cloud server receives the chat information uploaded by one or more pairs of AR glasses, integrates it, and sends it to each user, completing real-time synchronization of the information sent by each pair of AR glasses. User 1 can thus see the state changes of the other users' virtual portraits in front of them, achieving social communication among virtual portraits.
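The integration step above, where each pair of glasses uploads its own portrait plus user state and the cloud merges them into one social scene pushed back to every participant, can be sketched as follows. All field names are assumptions for illustration.

```python
def integrate_portraits(uploads):
    """Combine per-user portraits and state into virtual portrait social info."""
    return {
        "portraits": [
            {"user": u["user"], "avatar": u["avatar"],
             "position": u["position"], "expression": u["expression"]}
            for u in uploads
        ]
    }

# Each dict is what one pair of glasses would upload as first information.
scene = integrate_portraits([
    {"user": "u1", "avatar": "robot", "position": (0, 0), "expression": "smile"},
    {"user": "u2", "avatar": "cat", "position": (1, 0), "expression": "wave"},
])
```

Each pair of glasses would then render every portrait in `scene` except its own, placed according to the relative positions.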
Application to multi-person film watching: this can be achieved in two ways:
First, one AR glasses user shares video information with another AR glasses user. The specific steps are: the step of the cloud server receiving the first information includes receiving the video information sent by one pair of AR glasses;
the step of the cloud server sending the second information includes sending the video information to the other pair of AR glasses.
Second, at least two AR glasses users each request the video information from the cloud server. The specific steps are: the step of the cloud server receiving the first information includes receiving the service information sent by each pair of AR glasses;
the step of the cloud server generating second information according to the first information includes generating the video information according to the service information;
the step of the cloud server sending the second information to the AR glasses includes sending the video information to the at least two pairs of AR glasses.
Further, in both manners, the cloud server may also receive modification information sent by one pair of AR glasses, where the modification information includes modified video data;
the cloud server sends the modification information to each pair of AR glasses.
Further, the method includes the following steps:
the AR glasses generate a virtual image and send it to the cloud server;
the cloud server sends the virtual image to the other AR glasses;
the other AR glasses fix the video data and the virtual image in the same field of view through a spatial localization algorithm.
Specifically, users 1 and 2, up to user n (who may be in the same physical space or in different physical spaces), wear AR glasses. A screen appears in each user's field of view, fixed at a set position in each physical space through a SLAM algorithm, and the same multimedia content is synchronously fetched from the cloud server and played, achieving a shared multi-user viewing experience. During viewing, user 1 can respond to the content in real time through operations (for example, sending expressions, bullet comments, voice, or text comments to the other users); the operations are sent to the cloud server, which processes the operation information and distributes it to the other users, in whose fields of view it is presented. Alternatively, combined with each user's virtual portrait, the cloud server transmits the other users' virtual portraits to user 1, who displays them in view according to the visual environment in front of them using the SLAM algorithm.
This embodiment further provides a third AR glasses remote interaction system, as shown in fig. 4. The system includes one pair of AR glasses 10, a cloud server 20, and an external device. The AR glasses 10 communicate with the cloud server 20 over the network through a communicator, and the external device is connected to the cloud server 20. The external device collects real-time data of an object to be monitored and sends the real-time data to the AR glasses 10 through the cloud server 20. The real-time data collected by the external device may be images, sound, temperature, humidity, and similar information, so the data provided by the cloud server is not limited to data stored on it: the cloud server can also acquire real-time data, fuse it as needed, and send it to the AR glasses, allowing the user to perceive, or even reconstruct, a remote live environment in real time. The AR glasses, the cloud server, and the communication method in this system are the same as in the first embodiment; to avoid repetition, refer to the description of the first embodiment.
Based on the AR glasses remote interaction system of fig. 4, fig. 5 shows an AR glasses remote interaction method that includes:
S510: the cloud server acquires real-time data from the external device;
S520: the cloud server generates third information according to the real-time data;
S530: the cloud server sends the third information to the AR glasses.
The AR glasses remote interaction system provided by this embodiment supports a variety of application scenarios:
the step of the cloud server acquiring real-time data from the external device includes acquiring environment data from the external device;
the step of the cloud server generating third information according to the real-time data includes generating recommendation information according to the environment data;
the step of the cloud server sending the third information to the AR glasses includes sending the recommendation information to the AR glasses.
For example, application to smart home construction:
an environment data sensor on each smart home device collects environment data (for example, an air quality sensor in a room detects the PM2.5 level, a temperature sensor measures room temperature, a light sensor senses room brightness, and so on) and uploads it to the cloud server. The cloud server runs an integration analysis on the uploaded information to form recommendation information (suggesting a user operation or otherwise) and pushes it to the AR glasses that sent the data request. When wearing the AR glasses, the user sees recommendation prompts pushed by the cloud server in the field of view, for example a prompt to turn on the light when the room is dark, or to turn on the air purifier when the air pollution level in the room exceeds the standard.
Application to mixed reality live broadcasting:
the step of the cloud server acquiring real-time data from the external device includes acquiring image information and sound information from the external device;
the step of the cloud server generating third information according to the real-time data includes generating a first live image according to the image information and the sound information;
the step of the cloud server sending the third information to the AR glasses includes sending the live image to the AR glasses.
Specifically, information from the live scene can be collected by devices such as an image collector and a sound collector and uploaded to the cloud server, which integrates and processes the information and then sends it to the AR glasses; the user wears the AR glasses and watches the live content as if present in person.
Application to multi-doctor telemedicine, industrial remote assistance, or global communication:
the step of the cloud server acquiring real-time data from the external device includes acquiring physiological index information, running state information, or image information from the external device;
the step of the cloud server sending real-time data includes sending the physiological index information, running state information, or image information to the AR glasses.
As shown in fig. 6, the AR glasses remote interaction system includes a plurality of AR glasses 10 (AR glasses 1 to n), a cloud server 20, and a collector. AR glasses 1 to n are each in network communication with the cloud server 20 so as to perform remote sharing interaction through the cloud server 20, and the collector is connected to the cloud server 20. The AR glasses, the cloud server, and the communication method in this system are the same as in the first embodiment, and the collector is the same as in the third embodiment; to avoid repetition, refer to the descriptions of those embodiments.
Based on the AR glasses remote interaction system of fig. 6, the AR glasses remote interaction method of this embodiment supports many application scenarios.
For example, in multi-doctor telemedicine:
the step of the cloud server acquiring real-time data from the external device includes acquiring physiological index information from the external device;
the step of the cloud server sending real-time data includes sending the physiological index information to the AR glasses.
The cloud server receives glasses data: it receives external real image information collected by one pair of AR glasses.
The cloud server generates fourth information according to the real-time data and the glasses data: it integrates the external real image information and the physiological index information to generate a second live image, whose content includes at least the external real image information and the physiological index information.
The cloud server sends the fourth information to the AR glasses: it sends the second live image to the other AR glasses.
Further, the step of the cloud server generating fourth information according to the real-time data and the glasses data may include: integrating the external real image information to generate a virtual model, and integrating the virtual model and the physiological index information to generate a third live image, whose content includes at least the virtual model and the physiological index information;
the cloud server sends the fourth information to the AR glasses: it sends the third live image to the other AR glasses.
Further, the method also includes:
the cloud server receives the interaction information and the virtual model from one pair of AR glasses, fuses them to generate fifth information, and sends the fifth information to the other AR glasses.
As an example, a patient in an operating room is monitored in real time by wearable devices or other kinds of physiological information collectors (the external devices) (for example, data 1 to n such as electrocardiogram, pulse, blood pressure, and body temperature), and the data is uploaded to the cloud server in real time. The cloud server sends the physiological index information to a remote doctor wearing AR glasses, who can use it to assist the on-site doctor, also wearing AR glasses, in diagnosing and treating the patient. Further, the external real image information about the patient and the surrounding medical conditions collected by the on-site doctor through the worn mixed reality glasses is transmitted to the cloud server in real time as video or a 3D scan; the cloud server integrates the external real image information with the physiological index information to generate a live image and sends it to the remote doctor. The remote doctor's AR glasses present the relevant information about the patient in the operating room and allow the operation to be observed in real time, while the recognition module of the AR glasses collects the remote doctor's interaction information (such as gestures, handle input, eye recognition, and voice) to provide assistance to the on-site doctor. The part of the patient to be treated can also be uploaded to the cloud server as external real image information; the cloud server reconstructs a model from the image and transmits it to the remote doctor as an operable virtual model. The remote doctor demonstrates treatment operations through the AR glasses (via gesture recognition, image recognition, depth recognition, and so on), and the series of operations, in combination with the virtual model being operated in real time, is uploaded to the cloud server together with the interaction information; the cloud server integrates the operations and transmits the result, converted into virtual video form (the fourth information), to the on-site doctor. The remote doctor may be several doctors wearing mixed reality glasses in different places, and there may likewise be several doctors wearing mixed reality glasses on site.
Application to industrial remote assistance:
the step that the cloud server acquires real-time data from the external equipment comprises the step that running state information is acquired from the external equipment;
the step of sending real-time data by the cloud server comprises the step of sending the running state information to the AR glasses.
The method comprises the steps that a cloud server receives glasses data, and receives outside real image information collected by one AR glasses;
the cloud server generates fourth information according to the real-time data and the glasses data, integrates the external real image information and the running state information, and generates a second live broadcast live image; the content in the second live broadcast live video at least comprises the outside real image information and the running state information;
and the cloud server sends fourth information to the AR glasses, and the second live broadcast live image is sent to the other AR glasses.
Further, the step of the cloud server generating fourth information according to the real-time data and the glasses data includes: integrating the external real image information to generate a virtual model, and integrating the virtual model with the running state information to generate a third live broadcast image; the content of the third live broadcast image includes at least the virtual model and the running state information;
and the step of the cloud server sending the fourth information to the AR glasses includes sending the third live broadcast image to another of the AR glasses.
Further, the method further comprises:
the cloud server receives the interaction information and the virtual model from one of the AR glasses, fuses the interaction information with the virtual model to generate fifth information, and sends the fifth information to another of the AR glasses.
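The fusion step just described, in which interaction information is applied to the shared virtual model to produce the fifth information, could be sketched as below. The event types ("rotate", "annotate") and model fields are assumptions made for illustration only.

```python
def fuse(interaction_events, virtual_model):
    """Fuse a remote user's interaction events with the shared virtual model
    to produce the 'fifth information' payload. Event vocabulary and model
    fields are illustrative, not taken from the disclosure."""
    model = dict(virtual_model)  # do not mutate the caller's copy
    for ev in interaction_events:
        if ev["type"] == "rotate":
            # accumulate a demonstrated rotation, wrapping at 360 degrees
            model["rotation"] = (model.get("rotation", 0) + ev["degrees"]) % 360
        elif ev["type"] == "annotate":
            # attach a voice/image mark to the model
            model.setdefault("annotations", []).append(ev["text"])
    return {"model": model, "events": list(interaction_events)}


fifth = fuse(
    [{"type": "rotate", "degrees": 90}, {"type": "annotate", "text": "check valve"}],
    {"rotation": 300},
)
```

The cloud server would then forward the `fifth` payload to the other AR glasses for playback.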
The AR glasses fix the running state information from different external devices in the physical space of the AR glasses through a spatial positioning algorithm.
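Spatially fixing a status panel amounts to re-projecting a world-anchored point into the wearer's current view frame using the SLAM-estimated camera pose, so the panel stays put as the wearer moves. A 2D sketch under that assumption (illustrative math only, not the patent's algorithm):

```python
import math


def world_to_view(anchor_xy, cam_xy, cam_yaw_deg):
    """Re-project a world-fixed anchor (where a device's status panel is
    pinned) into the wearer's current 2D view frame, given the camera
    position and yaw estimated by SLAM."""
    dx = anchor_xy[0] - cam_xy[0]
    dy = anchor_xy[1] - cam_xy[1]
    yaw = math.radians(cam_yaw_deg)
    # rotate the world-frame offset into camera coordinates
    vx = math.cos(yaw) * dx + math.sin(yaw) * dy
    vy = -math.sin(yaw) * dx + math.cos(yaw) * dy
    return (round(vx, 6), round(vy, 6))
```

With the camera at the origin looking along its default axis, an anchor one unit ahead projects to (1.0, 0.0); after the wearer turns 90 degrees, the same world anchor moves to the side of the view, while its world position never changes.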
Specifically, an industrial production line includes a plurality of machine-operation collectors, a cloud server, a plurality of display screens, and a plurality of communication terminals. The machine-operation collectors acquire multiple pieces of running state information of the equipment (data 1-n, such as temperature, part usage state, and damage condition) and upload them to the cloud server. The cloud server sends the running state information to AR glasses worn by an equipment monitor; the display screens are positioned through a SLAM algorithm, and the running state information corresponding to the different devices is displayed on the screens. The cloud server analyzes the running state information in real time; when an abnormality occurs, it sends alarm information (the third information) to the AR glasses, which prompt the monitor with the abnormal state and suggest maintenance. When a field worker maintains the equipment, the worker can wear AR glasses whose image acquisition module collects external real image information, such as the on-site maintenance process, and uploads it to the cloud server in real time. The cloud server transmits live images of the maintenance process to a remote expert wearing AR glasses, and the remote expert provides maintenance guidance to the field worker through interaction information such as voice and image marks. The maintenance process or the damaged part of the equipment can also be uploaded to the cloud server as external real image information (real video or images); the cloud server reconstructs a model from the video or images and transmits it to the remote expert as an operable virtual model. The remote expert demonstrates maintenance operations in the AR glasses (via gesture recognition, image recognition, depth recognition, and the like), the series of operation actions is uploaded to the cloud server together with the virtual model being operated, and the cloud server integrates the operations and converts them into a virtual video (the fourth information) transmitted to the on-site operator.
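The real-time analysis that yields the alarm (third) information can be sketched as a threshold scan over the running-state data 1-n. The metric names and limits below are invented for illustration; the patent does not specify them.

```python
def check_states(readings, limits):
    """Scan running-state readings against per-metric limits and emit an
    alarm record (the 'third information') for every out-of-range value.
    readings: {device_id: {metric: value}}; limits: {metric: (lo, hi)}."""
    alarms = []
    for device, metrics in readings.items():
        for name, value in metrics.items():
            lo, hi = limits[name]
            if not (lo <= value <= hi):
                alarms.append({"device": device, "metric": name, "value": value})
    return alarms


# hypothetical snapshot: one press running hot, vibration within limits
alarms = check_states(
    {"press_01": {"temperature": 95, "vibration": 3}},
    {"temperature": (0, 80), "vibration": (0, 10)},
)
```

The cloud server would push each record in `alarms` to the monitor's AR glasses as a maintenance prompt.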
Application to holographic communication:
when the real-time data comprises a plurality of images,
the step of the cloud server generating third information according to the real-time data includes composing and splicing the plurality of images to generate a real-time 3D portrait;
the step of the cloud server sending the third information to the AR glasses includes sending the real-time 3D portrait to at least two AR glasses.
The step of composing and splicing the plurality of images to generate the real-time 3D portrait includes: analyzing image change conditions from the plurality of images, determining a preset special effect, and combining the preset special effect with the real-time 3D portrait to generate image information.
Specifically, user 1, user 2, and up to user n, located in different physical spaces, upload multiple images or videos taken from different angles to the cloud server through image acquisition module arrays in their respective environments (acquiring image data 1-n). The cloud server forms each user's real-time 3D portrait through algorithms such as composition and splicing. These real-time 3D portraits are transmitted to the other users through the cloud server, so that each user sees the holographic portraits of the others (including their real-time actions) in front of them, fixed at positions in their respective spaces through a SLAM algorithm. In addition, the cloud server can analyze the portrait images or videos input in real time: when a portrait performs a specific action, the cloud server triggers a specific effect. For example, when user 1 makes a hand-opening gesture, user 2 sees the portrait of user 1 opening the hand, and at the same time the cloud server triggers a preset money-scattering special effect, presented in front of user 1 or user 2 according to the program.
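The composition-and-trigger flow above, where multi-angle captures are fused into one portrait record and a preset effect fires on a recognized gesture, might be sketched as follows. The field names and the trigger table are illustrative assumptions; real splicing would operate on pixel data rather than frame labels.

```python
def compose_portrait(views, triggers):
    """Fuse per-angle captures (image data 1-n) into one portrait record
    and attach a preset special effect when the most recent recognized
    gesture matches a trigger. Stand-in for the 3D splicing step."""
    if not views:
        return None
    portrait = {
        # order the frames by capture angle, as a splicing pass would
        "frames": [v["frame"] for v in sorted(views, key=lambda v: v["angle"])],
        # take the gesture recognized in the latest capture, if any
        "gesture": views[-1].get("gesture"),
    }
    portrait["effect"] = triggers.get(portrait["gesture"])
    return portrait


portrait = compose_portrait(
    [{"angle": 0, "frame": "f0"},
     {"angle": 120, "frame": "f120", "gesture": "open_hand"}],
    {"open_hand": "money_rain"},  # hypothetical preset-effect table
)
```

Each receiving user's glasses would render `portrait["frames"]` at a SLAM-fixed position and play `portrait["effect"]` when it is set.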
The embodiment of the invention also provides an AR glasses remote interaction device which is applied to a cloud server, wherein the cloud server is in communication connection with the AR glasses through a network; the device comprises:
a receiving module, configured to receive first information, where the first information is sent by the AR glasses;
the generating module is used for generating second information according to the first information;
and the sending module is used for sending the second information to the AR glasses.
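The receive, generate, and send modules just listed form a simple pipeline. A minimal sketch, with the "generate" step reduced to tagging the payload and the transport replaced by a plain callback (all names illustrative):

```python
class CloudDevice:
    """Minimal receive -> generate -> send pipeline for the three modules
    above; the real 'generate' step would build second information such as
    life service, game, or video data."""

    def __init__(self, send_fn):
        self.send_fn = send_fn  # stands in for the sending module's transport

    def receive(self, first_information):
        # receiving module: accept first information from the AR glasses
        second_information = self.generate(first_information)
        # sending module: push second information back to the glasses
        self.send_fn(second_information)

    def generate(self, first_information):
        # generating module: derive second information from the first
        return {"kind": "second_information", "source": first_information}


sent = []
device = CloudDevice(sent.append)
device.receive({"kind": "service_request"})
```

The AR-glasses-side device described later mirrors this shape with the sending and receiving roles swapped.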
The device provided by this embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, where the device embodiment is silent, reference may be made to the corresponding content of the method embodiments.
Embodiments of the present invention also provide a computer readable medium having non-volatile program code executable by a processor, the program code causing the processor to perform any of the methods of the embodiments. The method is applied to the cloud server.
The embodiment of the invention also provides an AR glasses remote interaction device, which is applied to the AR glasses, wherein the AR glasses are in communication connection with the cloud server through a network; the device comprises:
a sending module, configured to send first information to the cloud server;
and a receiving module, configured to receive second information sent by the cloud server.
The device provided by this embodiment of the present invention has the same implementation principle and technical effects as the foregoing method embodiments; for brevity, where the device embodiment is silent, reference may be made to the corresponding content of the method embodiments.
Embodiments of the present invention also provide a computer readable medium having a non-volatile program code executable by a processor, where the program code causes the processor to execute any one of the methods described in the above embodiments. The method is applied to a cloud server or AR glasses.
Unless specifically stated otherwise, the relative steps, numerical expressions, and values of the components and steps set forth in these embodiments do not limit the scope of the present invention.
In all examples shown and described herein, any particular value should be construed as merely exemplary, and not as a limitation, and thus other examples of example embodiments may have different values.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Finally, it should be noted that the above embodiments are only specific embodiments of the present invention, used to illustrate its technical solutions rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope of the present disclosure, modify the technical solutions described in the foregoing embodiments, easily conceive of changes, or make equivalent substitutions for some technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be included within it. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (28)

1. The AR glasses remote interaction method is applied to a cloud server, and the cloud server is in communication connection with the AR glasses through a network; the method comprises the following steps:
the cloud server receives first information, wherein the first information is sent by the AR glasses;
the cloud server generates second information according to the first information;
the cloud server sends the second information to the AR glasses;
the first information comprises at least one of a service request, life flow information, real world environment information, virtual portrait, user information, 3D model information, update information, game information, video information or interaction information;
the second information comprises at least one of life service information, game information, video information, third visual angle enhancement information or virtual portrait social information;
the interaction information comprises: at least one of visual interaction data, handle control interaction data, position interaction data, gesture interaction data, or voice interaction data;
the vital flow information includes at least one of location information or image information;
when the cloud server is connected to at least two AR glasses,
the step of receiving the first information by the cloud server comprises receiving interactive information sent by one of the AR glasses, and receiving real-world environment information of the other AR glasses, wherein the real-world environment information of one of the AR glasses is collected by the other AR glasses;
the step of generating second information by the cloud server according to the first information comprises generating third visual angle enhancement information according to the interaction information and the real world environment information;
the step of sending the second information to the AR glasses by the cloud server includes sending the third visual angle enhancement information to one of the AR glasses.
2. The method of claim 1,
the step of receiving the first information by the cloud server comprises receiving the life flow information sent by the AR glasses;
the step of generating second information by the cloud server according to the first information comprises generating the life service information according to the life flow information;
the step of the cloud server sending the second information to the AR glasses comprises sending the life service information to the AR glasses.
3. The method of claim 1,
the step of receiving the first information by the cloud server comprises receiving a service request sent by AR glasses;
the step of generating second information by the cloud server according to the first information comprises generating the game information or the video information according to the service request;
the step of sending the second information to the AR glasses by the cloud server includes sending the game information or the video information to the AR glasses.
4. The method of claim 1,
the step of receiving the first information by the cloud server comprises receiving update information sent by AR glasses; the updating information is used when the game data are stored in the AR glasses and need to be updated;
the step of generating second information by the cloud server according to the first information comprises generating game information according to the updating information;
the step of the cloud server sending the second information to the AR glasses comprises sending the game information to the AR glasses.
5. The method of claim 3 or 4, wherein after the step of sending the game information to the AR glasses, the method further comprises:
synchronizing or storing the fused data; the fusion data is obtained by fusing the interaction information of the user collected by the gesture recognition module of the AR glasses and the game information.
6. The method of claim 1, wherein when the cloud server is connected to at least two AR glasses, the method further comprises:
the step of receiving the first information by the cloud server comprises receiving the 3D model information and/or game information and/or video information sent by one of the AR glasses;
the step of the cloud server sending the second information to the AR glasses includes sending the 3D model information and/or game information and/or video information to another of the AR glasses.
7. The method of claim 1, wherein when the cloud server is connected to at least two AR glasses,
the step of receiving the first information by the cloud server comprises receiving the service request sent by each AR glasses;
the step of generating second information by the cloud server according to the first information comprises generating the 3D model information and/or game information and/or video information according to the service request;
the step of the cloud server sending the second information to the AR glasses includes sending the 3D model information, and/or game information and/or video information to the at least two AR glasses.
8. The method according to claim 6 or 7, characterized in that the method further comprises:
receiving modification information sent by one of the AR glasses, wherein the modification information comprises modification of 3D model information, and/or modification of game data and/or modification of video data;
sending the modification information to each of the AR glasses.
9. The method of claim 1, wherein when the cloud server is connected to at least two AR glasses,
the step of receiving the first information by the cloud server comprises receiving respective virtual portrait and respective user information sent by at least two pieces of AR glasses; the respective user information is acquired by at least two pieces of AR glasses through an acquisition device;
the cloud server generates second information according to the first information, wherein the step of generating the second information comprises the steps of integrating the virtual portrait and the user information sent by each of at least two pieces of AR glasses to obtain a plurality of virtual portraits with the user information, and integrating the plurality of virtual portraits with the user information to generate virtual portrait social information;
the step of sending the second information to the AR glasses by the cloud server comprises sending the virtual portrait social information to each of the AR glasses.
10. The method of claim 1, wherein when one end of the cloud server is connected to an external device and the other end of the cloud server is communicatively connected to the AR glasses via a network, the method further comprises:
acquiring real-time data from external equipment;
generating third information according to the real-time data;
and sending the third information to the AR glasses.
11. The method of claim 10,
the step of acquiring real-time data from the external device includes acquiring environmental data from the external device;
generating third information according to the real-time data comprises generating recommendation information according to the environment data;
the step of sending the third information to the AR glasses includes sending the recommendation information to the AR glasses.
12. The method of claim 10,
the step of acquiring real-time data from the external device includes acquiring image information and sound information from the external device;
generating third information according to the real-time data, wherein the step of generating the third information comprises generating a first live video according to the image information and the sound information;
the step of sending the third information to the AR glasses includes sending the first live video to the AR glasses.
13. The method of claim 10, further comprising:
the step of acquiring real-time data from the external equipment comprises the steps of acquiring physiological index information, running state information or image information from the external equipment;
the step of sending real-time data includes sending at least one of the physiological indicator information, the operational state information, or the image information to the AR glasses.
14. The method of claim 13, wherein when one end of the cloud server is connected to at least two AR glasses and the other end of the cloud server is connected to an external device, the method further comprises:
a step of receiving glasses data, comprising: receiving outside real image information collected by one of the AR glasses;
generating fourth information according to the real-time data and the glasses data, wherein the step comprises the following steps: integrating the external real image information and the physiological index information, or integrating the external real image information and the running state information, or integrating the external real image information and the image information to generate a second live broadcast live image;
a step of transmitting fourth information to the AR glasses, including: and sending the second live video to another AR glasses.
15. The method of claim 14,
generating fourth information according to the real-time data and the glasses data, wherein the step comprises the following steps: integrating the external real image information to generate a virtual model, and integrating the virtual model and the physiological index information, or integrating the virtual model and the running state information, or integrating the virtual model and the image information to generate a third live broadcast live image;
a step of transmitting fourth information to the AR glasses, including: and sending the third live video to another AR glasses.
16. The method of claim 15, wherein after the step of sending fourth information to the AR glasses, the method further comprises:
receiving the interactive information and the virtual model of one AR glasses, fusing the interactive information and the virtual model to generate fifth information, and sending the fifth information to the other AR glasses.
17. The method of claim 10, wherein when the real-time data includes a plurality of images,
the step of generating third information according to the real-time data comprises the steps of composing and splicing a plurality of images to generate a real-time 3D portrait;
the step of sending the third information to the AR glasses includes sending the real-time 3D portrait to the at least two AR glasses.
18. The method of claim 17, wherein the step of composing and stitching the plurality of images to generate a real-time 3D portrait comprises:
and analyzing the change condition of the images according to the plurality of images, determining a preset special effect, and combining the preset special effect with the real-time 3D portrait to generate image information.
19. The AR glasses remote interaction method is applied to AR glasses, and the AR glasses are in communication connection with a cloud server through a network; the method comprises the following steps:
the AR glasses send first information to the cloud server;
the AR glasses receive second information sent by the cloud server;
the first information comprises a service request, life flow information, real world environment information, virtual portrait, user information, 3D model information, updating information, game information, video information and interaction information;
the second information comprises life service information, game information, video information, third visual angle enhancement information and virtual portrait social information;
the interaction information comprises: visual interaction data, handle control interaction data, position interaction data, gesture interaction data and voice interaction data;
the life stream information comprises position information or image information;
when the cloud server is connected to at least two AR glasses,
one of the AR glasses sends interaction information to the cloud server, and the other AR glasses sends real-world environment information of one of the AR glasses to the cloud server;
and the one of the AR glasses receives the third visual angle enhancement information sent by the cloud server.
20. The method of claim 19, wherein the method comprises:
storing the game data to the local before sending the first information;
the step of the AR glasses transmitting the first information includes: sending game updating information to the cloud server;
the step of the AR glasses receiving the second information includes: and receiving the updating data corresponding to the game updating information so as to update the game data stored in the local area through the updating data.
21. The method of claim 20, further comprising:
before the first information is sent, acquiring interaction information of a user through a gesture recognition module, and fusing and processing the interaction information and the game data;
the step of sending the first information by the AR glasses comprises the step of sending the data subjected to fusion processing to the cloud server so that the cloud server can calculate, synchronize or store the data subjected to fusion processing.
22. The method of claim 20, wherein after the step of receiving the second information, the method further comprises: and playing the received second information.
23. The method of claim 20, wherein after the step of receiving the second information, the method further comprises: and fixing the 3D model information and/or the game data and/or the video information and/or the virtual portrait social information and/or the running state information acquired from different external devices in a physical space of the AR glasses through a space positioning algorithm.
24. The method of claim 19, further comprising: and generating a virtual image, and sending the virtual image to the cloud server for sending the virtual image to another AR glasses.
25. The AR glasses remote interaction device is applied to a cloud server, and the cloud server is in communication connection with the AR glasses through a network; the device comprises:
a first receiving module, configured to receive first information, where the first information is sent by the AR glasses;
the first generating module is used for generating second information according to the first information;
a first sending module, configured to send the second information to the AR glasses;
the first information comprises at least one of a service request, life flow information, real world environment information, virtual portrait, user information, 3D model information, update information, game information, video information or interaction information;
the second information comprises at least one of life service information, game information, video information, third visual angle enhancement information or virtual portrait social information;
the interaction information comprises: at least one of visual interaction data, handle control interaction data, position interaction data, gesture interaction data, or voice interaction data;
the vital flow information includes at least one of location information or image information;
when the cloud server is connected with at least two AR glasses, the device further comprises:
a second receiving module, configured to receive the interaction information sent by one of the AR glasses and the real-world environment information of that AR glasses collected by the other AR glasses;
a second generating module, configured to generate the third visual angle enhancement information according to the interaction information and the real-world environment information;
and a second sending module, configured to send the third visual angle enhancement information to the one of the AR glasses.
26. A computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to perform the method of any of claims 1-18.
27. The AR glasses remote interaction device is applied to AR glasses, and the AR glasses are in communication connection with a cloud server through a network; the device comprises:
a first sending module, configured to send first information to the cloud server;
a first receiving module, configured to receive second information sent by the cloud server;
the first information comprises a service request, life flow information, real world environment information, virtual portrait, user information, 3D model information, updating information, game information, video information and interaction information;
the second information comprises life service information, game information, video information, third visual angle enhancement information and virtual portrait social information;
the interaction information comprises: visual interaction data, handle control interaction data, position interaction data, gesture interaction data and voice interaction data;
the life stream information comprises position information or image information;
when the cloud server is connected with at least two AR glasses, the device further comprises:
the second sending module is used for sending the interaction information to the cloud server by one of the AR glasses, and sending the environment information of the real world of one of the AR glasses to the cloud server by the other AR glasses;
and a second receiving module, configured for the one of the AR glasses to receive the third visual angle enhancement information sent by the cloud server.
28. A computer-readable medium having non-volatile program code executable by a processor, wherein the program code causes the processor to perform the method of any of claims 19-24.
CN201910239145.XA 2019-03-27 2019-03-27 AR glasses remote interaction method and device and computer readable medium Active CN109976690B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202010318402.1A CN111752511B (en) 2019-03-27 2019-03-27 AR glasses remote interaction method, device and computer readable medium
CN201910239145.XA CN109976690B (en) 2019-03-27 2019-03-27 AR glasses remote interaction method and device and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910239145.XA CN109976690B (en) 2019-03-27 2019-03-27 AR glasses remote interaction method and device and computer readable medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202010318402.1A Division CN111752511B (en) 2019-03-27 2019-03-27 AR glasses remote interaction method, device and computer readable medium

Publications (2)

Publication Number Publication Date
CN109976690A CN109976690A (en) 2019-07-05
CN109976690B true CN109976690B (en) 2020-05-12

Family

ID=67081062

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201910239145.XA Active CN109976690B (en) 2019-03-27 2019-03-27 AR glasses remote interaction method and device and computer readable medium
CN202010318402.1A Active CN111752511B (en) 2019-03-27 2019-03-27 AR glasses remote interaction method, device and computer readable medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202010318402.1A Active CN111752511B (en) 2019-03-27 2019-03-27 AR glasses remote interaction method, device and computer readable medium

Country Status (1)

Country Link
CN (2) CN109976690B (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110312142A (en) * 2019-07-23 2019-10-08 东华大学 A kind of AR live broadcast system based on 5G technology
CN111352243A (en) * 2020-03-31 2020-06-30 北京塞傲时代信息技术有限公司 AR remote rendering system and method based on 5G network
CN111443814B (en) * 2020-04-09 2023-05-05 深圳市瑞云科技有限公司 AR (augmented reality) glasses system based on cloud rendering
CN113705278A (en) * 2020-05-20 2021-11-26 深圳市看见智能科技有限公司 Human behavior recognition system and method for intelligent glasses
CN111638785A (en) * 2020-05-29 2020-09-08 熙迈(上海)检测技术服务有限公司 Real-time interaction method of AR module
CN111765890B (en) * 2020-06-28 2023-08-15 山东浪潮科学研究院有限公司 Navigation method of indoor navigation system based on cloud image recognition and AR
CN111938608A (en) * 2020-09-15 2020-11-17 安徽理工大学 AR (augmented reality) glasses, monitoring system and monitoring method for intelligent monitoring of old people
CN112306240A (en) * 2020-10-29 2021-02-02 中国移动通信集团黑龙江有限公司 Virtual reality data processing method, device, equipment and storage medium
CN112968787A (en) * 2021-02-01 2021-06-15 杭州光粒科技有限公司 Conference control method, AR (augmented reality) equipment and conference system
CN113010009B (en) * 2021-02-08 2022-07-22 北京蜂巢世纪科技有限公司 Object sharing method and device
CN112992381A (en) * 2021-03-10 2021-06-18 张志宏 MR (magnetic resonance) collaborative sharing communication system for surgical diagnosis and treatment
CN113566829A (en) * 2021-07-19 2021-10-29 上海极赫信息技术有限公司 High-precision positioning technology-based mixed reality navigation method and system and MR (magnetic resonance) equipment
CN113724398A (en) * 2021-09-01 2021-11-30 北京百度网讯科技有限公司 Augmented reality method, apparatus, device and storage medium
CN114063771A (en) * 2021-10-14 2022-02-18 内蒙古雲图计算机软件开发有限公司 Park scene interactive display system based on AR technology
CN114038260A (en) * 2021-10-28 2022-02-11 徐州嘉恒智能科技有限公司 AR glasses-based coal mine safety operation training method
CN114268784A (en) * 2021-12-31 2022-04-01 东莞仲天电子科技有限公司 Method for improving the near-eye display experience of AR (augmented reality) equipment
CN114584610A (en) * 2022-01-20 2022-06-03 北京释限创新科技有限公司 AR glasses message push service method and system, server and storage medium
CN117994472A (en) * 2022-03-18 2024-05-07 郑州泽正技术服务有限公司 Method and system for real-world social interaction using virtual scenes and AR glasses
CN118092658B (en) * 2024-03-11 2024-10-11 中国长江电力股份有限公司 AR-based interactive auxiliary maintenance platform and device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106791699A (en) * 2017-01-18 2017-05-31 北京爱情说科技有限公司 A remote head-mounted interactive video sharing system
CN107315919A (en) * 2017-07-18 2017-11-03 苏州国科康成医疗科技有限公司 Telemedicine playback system
CN109086726A (en) * 2018-08-10 2018-12-25 陈涛 A local image recognition method and system based on AR smart glasses

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106997235B (en) * 2016-01-25 2018-07-13 亮风台(上海)信息科技有限公司 Method and device for realizing augmented reality interaction and presentation
CN106354248A (en) * 2016-05-16 2017-01-25 刘瑞雪 Augmented reality system and interactive system for obtaining user information, and method thereof
CN108427498A (en) * 2017-02-14 2018-08-21 深圳梦境视觉智能科技有限公司 An interaction method and device based on augmented reality
CN107678538A (en) * 2017-09-05 2018-02-09 北京原力创新科技有限公司 Augmented reality system and information processing method therein, storage medium, processor
WO2019055679A1 (en) * 2017-09-13 2019-03-21 Lahood Edward Rashid Method, apparatus and computer-readable media for displaying augmented reality information
CN108983974B (en) * 2018-07-03 2020-06-30 百度在线网络技术(北京)有限公司 AR scene processing method, device, equipment and computer-readable storage medium

Also Published As

Publication number Publication date
CN109976690A (en) 2019-07-05
CN111752511A (en) 2020-10-09
CN111752511B (en) 2024-08-27

Similar Documents

Publication Publication Date Title
CN109976690B (en) AR glasses remote interaction method and device and computer readable medium
US10593025B2 (en) Method and system for reconstructing obstructed face portions for virtual reality environment
KR102362001B1 (en) Method and system for providing eye tracking based information about a user behavior, client device, server and computer program product
US9746912B2 (en) Transformations for virtual guest representation
US9551873B2 (en) Head mounted device (HMD) system having interface with mobile computing device for rendering virtual reality content
US9020203B2 (en) System and method for managing spatiotemporal uncertainty
EP3691280B1 (en) Video transmission method, server, vr playback terminal and computer-readable storage medium
CN109445103B (en) Display picture updating method and device, storage medium and electronic device
EP3888764A1 (en) Filtering and parental control methods for restricting visual activity on a head mounted display
US20080079752A1 (en) Virtual entertainment
CN111862348B (en) Video display method, video generation method, device, equipment and storage medium
CN108983974B (en) AR scene processing method, device, equipment and computer-readable storage medium
US20180169517A1 (en) Reactive animation for virtual reality
Viola et al. VR2Gather: A collaborative, social virtual reality system for adaptive, multiparty real-time communication
CN105739703A (en) Virtual reality somatosensory interaction system and method for wireless head-mounted display equipment
CN111768478A (en) Image synthesis method and device, storage medium and electronic equipment
CN113965773A (en) Live broadcast display method and device, storage medium and electronic equipment
CN116017014A (en) Video processing method, device, electronic equipment and storage medium
CN114915735A (en) Video data processing method
KR102149608B1 (en) Method for providing mixed rendering content using virtual reality and augmented reality and system using the same
CN113946221A (en) Eye driving control method and device, storage medium and electronic equipment
CN109474819B (en) Image presenting method and device
JP7344084B2 (en) Content distribution system, content distribution method, and content distribution program
Ryskeldiev et al. ReactSpace: Spatial-Aware User Interactions for Collocated Social Live Streaming Experiences
CN113179435B (en) Television remote karaoke method and system based on VR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant