
CN113873313B - Virtual reality picture sharing method and device - Google Patents

Virtual reality picture sharing method and device

Info

Publication number
CN113873313B
CN113873313B (application CN202111107630.5A)
Authority
CN
China
Prior art keywords
receiving end, end device, panoramic image, gesture, image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111107630.5A
Other languages
Chinese (zh)
Other versions
CN113873313A (en)
Inventor
彭兴发
许孜奕
陈朝阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lexiang Technology Co ltd
Original Assignee
Lexiang Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lexiang Technology Co ltd
Priority to CN202111107630.5A
Publication of CN113873313A
Application granted
Publication of CN113873313B
Legal status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312: Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00: 3D [Three Dimensional] image rendering
    • G06T15/005: General purpose rendering architectures
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Graphics (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The embodiment of the application provides a virtual reality picture sharing method and device, which are used to improve the immersion of the receiving end device user when the poses of the receiving end device and the transmitting end device are inconsistent. The method comprises the following steps: receiving, from a transmitting end device, data of a first partial image and the pose and field of view of the transmitting end device corresponding to the first partial image; determining the direction and size of the first partial image in the panoramic image of the receiving end device according to the pose and field of view of the transmitting end device; rendering the first partial image onto the panoramic image of the receiving end device according to that direction and size; and displaying the panoramic image of the receiving end device on the receiving end device.

Description

Virtual reality picture sharing method and device
Technical Field
The application relates to the technical field of virtual reality, in particular to a virtual reality picture sharing method and device.
Background
Virtual reality (VR) technology uses a computer simulation system to provide an interactive, immersive virtual three-dimensional environment, giving the user a sense of presence in that world by simulating senses such as vision, hearing, and touch.
Because virtual reality technology offers immersion, interactivity, and imagination, it is widely applied across industries. A common scenario in education, training, tourism, and similar fields is that one person operates and demonstrates while one or more others share the presenter's field of view and watch the interactive scene the presenter operates. In the prior art, when the viewing directions of the transmitting end device and the receiving end device are inconsistent, content may be missing within the visible range of the receiving end device, which reduces the immersion of the receiving end device user.
Disclosure of Invention
The embodiment of the application provides a virtual reality picture sharing method and device, which are used to improve the immersion of the receiving end device user when the poses of the receiving end device and the transmitting end device are inconsistent.
In a first aspect, an embodiment of the present application provides a virtual reality picture sharing method, applied to a receiving end device, for example a receiving end virtual reality device.
The method comprises the following steps: receiving, from a transmitting end device, data of a first partial image and the pose and field of view of the transmitting end device corresponding to the first partial image; determining the direction and size of the first partial image in the panoramic image of the receiving end device according to the pose and field of view of the transmitting end device; rendering the first partial image onto the panoramic image of the receiving end device according to that direction and size; and displaying the panoramic image of the receiving end device on the receiving end device.
According to this technical solution, the receiving end device renders the received partial image of the transmitting end device onto a panoramic image according to its corresponding pose and field of view, and displays that panoramic image on the receiving end device, so that the receiving end device can display the latest partial image as the pose of the transmitting end device changes, improving the immersion of the receiving end device user.
In one possible design, before rendering the first partial image onto the panoramic image of the receiving end device according to its direction and size, the method further includes: preprocessing the current panoramic image of the receiving end device, where the preprocessing includes fading, darkening, or blurring the original panoramic image.
According to this technical solution, the old panoramic image is preprocessed by fading, darkening, or blurring it, to inform the receiving end device user that the validity of that content gradually decreases as time passes. This improves the immersion of the receiving end device user and gives the user a correct understanding of the three-dimensional scene.
In one possible design, the method further comprises: acquiring the gesture of the receiving end equipment; and if the deviation between the gesture of the receiving end device and the gesture of the sending end device is greater than or equal to a set threshold value, displaying a direction indication mark, wherein the direction indication mark is used for indicating the receiving end device to adjust the gesture.
According to this technical solution, when the pose of the receiving end device differs greatly from the pose of the transmitting end device, the receiving end device displays a direction indication mark, so that the receiving end device user senses the change in the transmitting end device's pose and is guided to adjust the pose of the receiving end device toward that of the transmitting end device. This makes it easy for the receiving end device user to quickly focus on the content the transmitting end device user wants to present.
In one possible design, after the displaying the direction indicator, the method further includes: and when detecting that the deviation between the gesture of the receiving end device and the gesture of the sending end device is smaller than the set threshold value, clearing the direction indication mark.
In one possible design, the method further comprises: displaying a picture indication mark, where the picture indication mark is used to indicate the area where the first partial picture is located.
In one possible design, the picture indication mark is a rectangular frame with a reminder color.
According to this technical solution, when the pose of the receiving end device is close to the pose of the transmitting end device, the direction indication mark can be cleared and the picture indication mark displayed, to remind the receiving end device user of the most recently displayed content. In this way, the receiving end device user can perceive the picture displayed by the transmitting end device in real time, while the content the transmitting end device user wants to express is highlighted.
In one possible design, the method further comprises: creating a panoramic image carrier, wherein the panoramic image carrier is used for drawing a panoramic image of the receiving end equipment; the rendering the first partial image onto the panoramic image of the receiving end device includes: drawing the first partial image at a corresponding position of the panoramic image carrier; and updating and displaying the panoramic image.
In a second aspect, an embodiment of the present application provides a virtual reality image sharing device, including: the receiving and transmitting module is used for receiving data of a first partial image from the transmitting end equipment and the gesture and the view angle of the transmitting end equipment corresponding to the first partial image; the processing module is used for determining the direction and the size of the first partial image in the panoramic image of the receiving end equipment according to the gesture and the view angle of the sending end equipment; a display module, configured to render the first partial image onto the panoramic image of the receiving end device according to the direction and the size of the first partial image in the panoramic image of the receiving end device; and displaying the panoramic image of the receiving end equipment in the receiving end equipment.
In one possible design, the processing module is further configured to: preprocessing the current panoramic image of the receiving end equipment; the pretreatment comprises the following steps: and fading or darkening or blurring the original panoramic image.
In one possible design, the transceiver module is further configured to: acquiring the gesture of the receiving end equipment; the display module is further configured to: and if the deviation between the gesture of the receiving end device and the gesture of the sending end device is greater than or equal to a set threshold value, displaying a direction indication mark, wherein the direction indication mark is used for indicating the receiving end device to adjust the gesture.
In one possible design, the display module is further configured to: and when detecting that the deviation between the gesture of the receiving end device and the gesture of the sending end device is smaller than the set threshold value, clearing the direction indication mark.
In one possible design, the display module is further configured to: display a picture indication mark, where the picture indication mark is used to indicate the area where the first partial picture is located.
In one possible design, the picture indication mark is a rectangular frame with a reminder color.
In one possible design, the processing module is further configured to: creating a panoramic image carrier, wherein the panoramic image carrier is used for drawing a panoramic image of the receiving end equipment; the display module is further configured to: drawing the first partial image at a corresponding position of the panoramic image carrier; and updating and displaying the panoramic image.
In a third aspect, embodiments of the present application further provide a computing device, including:
a memory for storing program instructions;
a processor for invoking program instructions stored in said memory and performing the method as described in the various possible designs of the first aspect according to the obtained program instructions.
In a fourth aspect, embodiments of the present application also provide a computer-readable storage medium comprising computer-readable instructions which, when read and executed by a computer, cause the computer to perform the method as described in the various possible designs of the first aspect.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic diagram of a virtual reality picture sharing system according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a transmitting end device or a receiving end device provided in an embodiment of the present application;
fig. 3 is a flow chart of a virtual reality picture sharing method according to an embodiment of the present application;
fig. 4 is a specific example of a virtual reality picture sharing method provided in an embodiment of the present application;
fig. 5 is another specific example of a virtual reality picture sharing method provided in an embodiment of the present application;
fig. 6 is a schematic process flow diagram of a transmitting end device according to an embodiment of the present application;
fig. 7 is a schematic process flow diagram of a receiving end device according to an embodiment of the present application;
fig. 8 is a schematic structural diagram of a virtual reality picture sharing device according to an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, wherein it is apparent that the described embodiments are only some, but not all, of the embodiments of the present application. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Fig. 1 schematically illustrates a virtual reality picture sharing system, which is applicable to the embodiments of the present application, and includes a transmitting end device 110, a receiving end device 120, and optionally, a data transmission device 130. The transmitting device 110 is configured to transmit, to the receiving device 120, local image data (a part of the panoramic image) of the transmitting device 110, and data such as a pose and a view angle corresponding to the local image, for example, through the data transmission device 130. The receiving end device 120 is configured to receive the local image data of the transmitting end device 110 and the corresponding pose and view angle thereof, and draw the local image according to the pose and view angle.
The transmitting device 110 may be a VR device, or may be a device with a camera function, such as a computer, a smart phone, or an unmanned aerial vehicle, or may be an augmented reality (augmented reality, AR) device, a Mixed Reality (MR) device, or a device for a running 3D game, which is not specifically limited in this application.
The transmitting end equipment has the following characteristics:
1. a partial image is generated, not a complete panoramic image.
2. The gesture corresponding to the local image can be acquired and sent to the receiving end equipment.
3. The pose is not fixed, but may change over time, and other partial images of the panoramic image (corresponding to the changed pose) can be generated as the pose changes.
The receiving end device 120 may be a virtual reality device, or may be a device with a graphic image drawing function, such as a computer, a smart phone, or an unmanned aerial vehicle, or an AR device or an MR device, which is not specifically limited in this application.
Either the transmitting-end device 110 or the receiving-end device 120 or the data transmission device 130 may have a structure as shown in fig. 2, which includes a processor 210, a communication interface 220, and a memory 230.
The communication interface 220 is used for communicating with other devices, receiving and transmitting information transmitted by the other devices, and realizing communication.
The processor 210 is a control center of the transmitting-end device 110 or the receiving-end device 120 or the data transmission device 130, connects the entire transmitting-end device 110 or the receiving-end device 120 or the respective portions of the data transmission device 130 using various interfaces and routes, performs various functions of the transmitting-end device 110 or the receiving-end device 120 or the data transmission device 130 and processes data by running or executing software programs or modules stored in the memory 230, and calling the data stored in the memory 230. Optionally, the processor 210 may include one or more processing units.
The memory 230 may be used to store software programs and modules that the processor 210 executes to perform various functional applications and data processing by executing the software programs and modules stored in the memory 230. The memory 230 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, application programs required for at least one function, and the like; the storage data area may store data created according to business processes, etc. In addition, memory 230 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid-state storage device.
The above-described example is merely an example of a virtual reality screen display system, and the embodiments of the present application are not limited thereto.
Fig. 3 exemplarily illustrates a virtual reality picture sharing method provided in an embodiment of the present application, where the method may be executed by a receiving end device. As shown in fig. 3, the method includes:
step 301, the receiving end device receives data of a first partial image from the transmitting end device and a posture and a view angle of the transmitting end device corresponding to the first partial image.
The pose includes a rotation angle and a position, that is, a three-dimensional angle (three degrees of freedom of rotation about the x, y, and z axes) and a three-dimensional coordinate (three degrees of freedom of translation along the x, y, and z directions). When the pose of the transmitting end changes, the partial image currently displayed by the transmitting end changes with it. The field of view refers to the angle between the lines from the two edges of the visible picture to the observation point; it includes a horizontal value and a vertical value, which determine the width and height of the image, i.e., the size of the visible picture. The field of view generally does not change.
Before executing the steps, a sender device user starts the sender device, creates a three-dimensional scene, and sends data of a first partial image of the sender device and the gesture and the view angle of the sender device corresponding to the first partial image to the receiver device through the data transmission device.
After the user of the receiving device starts the application, a panoramic image carrier may be created first, where the panoramic image carrier is used to render the panoramic image of the receiving device, and may be understood as a carrier or object that presents the panoramic image. For example, the panoramic image carrier may be a cube map (cube map), sphere map (sphereap), sky box (skybox), or the like. Because each frame of the partial image sent by the sending end device is changed, and the receiving end device cannot predict the content of the partial image, the receiving end device needs to render the received partial image onto the panoramic image carrier so as to store the content for subsequent display and processing by the receiving end device. The partial image received by the receiving end device is different from the panoramic image in visibility to the user of the receiving end device, and the user of the receiving end device cannot perceive the partial image received by the receiving end device, namely the partial image received by the receiving end device is transparent to the user of the receiving end device, and the user of the receiving end device sees the visible part of the panoramic image.
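The role of the panoramic image carrier as persistent storage can be sketched as follows. This is a minimal illustration under assumptions, not the patent's implementation: a cube map is modelled as six square greyscale faces initialised to black, matching the description that the carrier defaults to black "no content"; the face names and size are hypothetical.

```python
def create_panorama_carrier(face_size=256):
    """Create a cube-map style panoramic image carrier: six square
    faces, each a face_size x face_size grid of greyscale pixels,
    initialised to 0 (black, meaning no content has arrived yet)."""
    faces = ("px", "nx", "py", "ny", "pz", "nz")  # the +-x, +-y, +-z faces
    return {face: [[0] * face_size for _ in range(face_size)]
            for face in faces}
```

A real implementation would allocate this as a GPU cube-map (or sphere-map or skybox) texture; a plain in-memory grid is enough to show that the carrier stores every partial image received so far for later display.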
The gesture of the sender device may be the same as or different from the gesture of the sender device user. For example, when the user wears a virtual reality device such as VR glasses on his head, the gesture of the transmitting device is the gesture of the user of the transmitting device; when a user uses VR functions in a smartphone, the gesture of the sender device may be different from the gesture of the sender device user. Likewise, the pose of the receiving-end device may be the same as or different from the pose of the user of the receiving-end device. According to the embodiment of the application, sharing and displaying of the virtual reality picture are executed according to the gesture of the sending end equipment and the gesture of the receiving end equipment.
Step 302, the receiving end device determines the direction and the size of the first partial image in the panoramic image of the receiving end device according to the gesture and the view angle of the sending end device.
After the panoramic image carrier is created, the receiving end device may determine, according to the pose and the view angle of the transmitting end device, a direction and a size of the first partial image corresponding to the panoramic image of the receiving end device. Wherein the pose is used to determine the direction of the first partial image in the panoramic image and the field angle is used to determine the size of the first partial image in the panoramic image.
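Step 302 can be sketched roughly as follows. The sketch assumes an equirectangular panorama rather than the cube map a real implementation might use, and ignores roll; it only illustrates that the pose supplies the centre direction of the region while the field of view supplies its extent.

```python
def partial_image_region(yaw_deg, pitch_deg, h_fov_deg, v_fov_deg,
                         pano_width, pano_height):
    """Map the sender's pose (yaw/pitch) and field of view to a pixel
    rectangle (left, top, width, height) in an equirectangular
    panorama spanning 360 x 180 degrees."""
    # Direction: the pose picks the centre of the region.
    cx = (yaw_deg % 360.0) / 360.0 * pano_width
    cy = (90.0 - pitch_deg) / 180.0 * pano_height
    # Size: the field of view picks the region's angular extent.
    w = h_fov_deg / 360.0 * pano_width
    h = v_fov_deg / 180.0 * pano_height
    return (int(cx - w / 2), int(cy - h / 2), int(w), int(h))
```

For example, a sender looking at yaw 180°, pitch 0° with a 90°×90° field of view maps to a 900×900-pixel region centred in a 3600×1800 panorama.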
Step 303, optionally, preprocessing the current panoramic image of the receiving end device, where the preprocessing includes: and (5) fading or darkening or blurring the original panoramic image.
Considering that the pose of the transmitting end device changes in real time, simply rendering into the panoramic image may leave the receiving end device user unable to perceive the change in the transmitting end device's pose, which weakens the content the transmitting end device user wants to express and explain. Moreover, since only the partial image corresponding to the transmitting end device's pose changes within the panoramic image of the receiving end device, the rest of the image does not change; when the poses of the transmitting end device and the receiving end device are inconsistent, the receiving end device user may form a mistaken understanding of the real three-dimensional world. Therefore, the panoramic image may be preprocessed before a new partial image is rendered, for example by fading, darkening, or blurring the original panoramic image. The preprocessing may be, but is not limited to, alpha blending or Gaussian blur. For example, a black translucent mask is overlaid on the panoramic image to darken it using alpha blending, or Gaussian blur and darkening are applied repeatedly. This informs the receiving end device user that, as time advances, the validity of the content in front of them gradually decreases and the transmitting end device has updated content corresponding to other poses. If there is no partial image update at a location, the panoramic image there grows darker until, after repeated darkening for a period of time, it appears black, giving the receiving end device user the sense that the partial image slowly disappears.
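The fading step can be sketched as repeated alpha blending of a black mask; the blend factor and the greyscale representation are assumptions for illustration only.

```python
def darken(panorama, alpha=0.2):
    """One preprocessing pass: alpha-blend a fully black translucent
    mask over the panorama, so content not refreshed by a new partial
    image fades toward black (pixels are 0-255 greyscale values)."""
    return [[int(p * (1.0 - alpha)) for p in row] for row in panorama]

# Three passes with no fresh content at these pixels:
pano = [[200, 100], [50, 255]]
for _ in range(3):
    pano = darken(pano)   # each pass keeps 80% of the remaining value
```

Gaussian blur could be composed with this pass in the same way; either way, repeated application drives stale regions toward black, producing the "slowly disappearing" effect described above.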
Step 304, the receiving end device renders the first partial image onto the panoramic image of the receiving end device according to the direction and the size of the first partial image in the panoramic image of the receiving end device.
Step 305, displaying the panoramic image of the receiving end device in the receiving end device.
Specifically, after preprocessing the original panoramic image, the receiving end device draws the first partial image onto the panoramic image according to its direction and size in the panoramic image of the receiving end device. When drawing the first partial image, the receiving end device can use the panoramic image carrier as the shader render target, clear the depth and stencil buffers of the render target during rendering but not its color buffer, and draw the first partial image as a shader texture resource at the corresponding position of the panoramic image, so as to preserve the other partial image data while updating the panoramic image.
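Not clearing the colour buffer is what preserves the previously drawn partial images. A CPU-side sketch with assumed names (a real renderer would do this on the GPU against the carrier texture):

```python
def render_partial(panorama, patch, left, top):
    """Draw a partial image at its computed position. Pixels outside
    the patch are left untouched, mirroring a render pass that clears
    depth/stencil but not the colour buffer."""
    width = len(panorama[0])
    for dy, row in enumerate(patch):
        y = top + dy
        if not 0 <= y < len(panorama):
            continue                                  # clip vertically
        for dx, pixel in enumerate(row):
            panorama[y][(left + dx) % width] = pixel  # wrap around in yaw
    return panorama
```

Every earlier patch stays in place, so as the sender's pose sweeps around, the receiver's panorama gradually accumulates coverage of the whole scene.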
In one possible implementation manner, to help the user of the receiving end device to quickly find the gesture corresponding to the sending end, the receiving end device may obtain the gesture of the receiving end device, and if the deviation between the gesture of the receiving end device and the gesture of the sending end device is greater than or equal to a set threshold, a direction indicator is displayed on the receiving end device, where the direction indicator is used to instruct the receiving end device to adjust the gesture.
After the receiving end device user adjusts the pose according to the direction indication mark, the direction indication mark is cleared once the deviation between the pose of the receiving end device and the pose of the transmitting end device falls below the set threshold. The threshold may be set empirically.
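The show/clear logic can be sketched as a comparison of pose deviation against the threshold. The deviation metric (larger of wrapped yaw difference and pitch difference) and the 30° default are assumptions for illustration; the patent only says the threshold is set empirically.

```python
def angular_deviation(yaw_a, pitch_a, yaw_b, pitch_b):
    """Deviation between two poses: the larger of the wrapped yaw
    difference and the pitch difference, in degrees."""
    dyaw = abs((yaw_a - yaw_b + 180.0) % 360.0 - 180.0)  # wrap into [0, 180]
    return max(dyaw, abs(pitch_a - pitch_b))

def show_direction_indicator(receiver_pose, sender_pose, threshold_deg=30.0):
    """True while the direction indication mark should be displayed;
    once the receiver turns within the threshold it is cleared."""
    return angular_deviation(*receiver_pose, *sender_pose) >= threshold_deg
```

Note the yaw wrap-around: a receiver at 350° and a sender at 10° are only 20° apart, so no indicator is shown at a 30° threshold.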
Further, after the direction indicator is cleared, the receiving end device may display a screen indicator for indicating an area where the updated first partial screen is located for a certain period of time. The screen indication mark can be, for example, a rectangular frame with a reminding color, and the updated first partial screen is highlighted by the rectangular frame with the reminding color, so that the content which the user of the sending end equipment wants to express can be emphasized.
The comparison between the pose of the transmitting end device and the pose of the receiving end device only determines whether to display the direction indication mark. The direction indication mark and the panoramic image are two independent objects, and processing them within the same frame (the same cycle) does not affect the result. Therefore, whether the deviation between the current pose of the receiving end device and the pose of the transmitting end device reaches the threshold can be judged either after the pose of the transmitting end device is received, or after the received partial image has been rendered onto the panoramic image carrier.
To make the embodiments of the present application clearer, the process of sharing a virtual reality picture in the technical solution of the present application is described in detail below with reference to the specific examples in fig. 4 and fig. 5. The application scenario of this example is that the user of the sending end device introduces a three-dimensional animal scene to the user of the receiving end device.
Fig. 4[101] shows a three-dimensional scene generated by the sending end: an elephant is directly in front of the sending end, a giraffe is on the left side, and a red-crowned crane is on the right side. At this moment the user of the sending end faces straight ahead, the three-dimensional angle is (0, 0, 0), and the three-dimensional coordinate is (0, 0, 0). The user of the sending end device starts to introduce the object directly in front; the sending end device generates an object image as in fig. 4[102] through the graphics engine, and transmits the image data, together with data such as the gesture and angle of view of the sending end device, to the receiving end device through the data transmission module.
The user of the receiving end device starts the device or application, which first generates a panoramic image carrier, as in fig. 4[302]; here it is a hexahedral (cube) map, with the default black indicating no content. Next, the receiving end device receives the image data and the gesture and angle of view of the sending end device, determines the corresponding direction and size of the received image in the panoramic image of the receiving end device according to that gesture and angle of view, takes the panoramic image as a rendering target (render target) without clearing the color buffer, draws the received first partial image as a 2D texture (Texture2D) resource onto the panoramic image (render to cubemap), and updates the receiving end panoramic image as in fig. 4[303]. When the user of the receiving end device is in the same gesture as the user of the sending end device, an elephant appears directly in front of the user, as shown in fig. 4[304].
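A rough sketch of this update step is shown below, using a flat equirectangular array in place of a real cubemap carrier and nearest-neighbour resampling; the function name and the mapping from yaw and field of view to pixel columns are illustrative assumptions. The key property matches the text: pixels outside the pasted region are left untouched, i.e. the buffer is not cleared.

```python
import numpy as np

def update_panorama(panorama, partial, yaw_deg, fov_deg):
    """Paste a received partial image into an equirectangular panorama.

    `panorama` is H x W x 3. The horizontal placement of the paste is
    derived from the sender's yaw, and the width of the pasted region
    from its field of view. All other pixels keep their old content.
    """
    h, w, _ = panorama.shape
    # Horizontal pixel span covered by the sender's angle of view.
    span = int(w * fov_deg / 360.0)
    center = int(w * ((yaw_deg % 360.0) / 360.0))
    left = (center - span // 2) % w
    cols = [(left + i) % w for i in range(span)]  # wrap around the seam
    # Nearest-neighbour resample so the partial image fills the span.
    ph, pw, _ = partial.shape
    rows = np.linspace(0, ph - 1, h).astype(int)
    src_cols = np.linspace(0, pw - 1, span).astype(int)
    panorama[:, cols] = partial[rows][:, src_cols]
    return panorama
```

For a 90-degree field of view on a 16-column panorama, only 4 columns centered on the sender's yaw are overwritten; everything else stays black until a later update covers it.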
When the user of the sending end device turns left as in fig. 5[101], the three-dimensional angle is (0, -90, 0) and the three-dimensional coordinates are (0, 0, 0), and the user begins to introduce the giraffe. At this point the sending end device generates a giraffe image as in fig. 5[102] through the graphics engine, and transmits the image data and the gesture data of the sending end device to the receiving end device through the data transmission module. The receiving end device receives the image of the sending end device as shown in fig. 5[301] together with the new gesture data, first draws a black image mask to preprocess the old panoramic image (fig. 5[302]) as shown in fig. 5[303], then draws the new partial image at the (0, -90, 0) position of the panoramic image carrier, and updates the panoramic image as shown in fig. 5[304].
Because the user of the receiving end device is free to move, several different situations can arise:
1. The user of the receiving end device turns left together with the user of the sending end device, and the giraffe appears directly in front of the user of the receiving end device, as shown in fig. 5[305].
2. The user of the receiving end device does not turn with the user of the sending end device; the receiving end device remains in its original gesture, and directly in front of the user of the receiving end device there is still an elephant, as in fig. 5[306]. At this moment the user of the sending end device is explaining the giraffe while the user of the receiving end device is facing the elephant, so the latter cannot follow what is being explained. Meanwhile, since only the content in the direction the sending end device is facing is updated, the content in other directions is not updated. The user of the receiving end device sees a static elephant and may mistake it for a still image, or assume the application has frozen. When the elephant leaves the visible range of the sending end, or even when the whole scene is switched, if the sending end device is not facing directly ahead, the user of the receiving end device still sees the elephant directly in front and forms an incorrect understanding of the scene content. To solve this problem, when rendering the received partial image, a black translucent image mask is first drawn over the panoramic image to darken the original panoramic image by color mixing, or the original panoramic image is repeatedly darkened by Gaussian blur and dimming; the partial image of the sending end is then rendered, as shown in fig. 5[307], until the stale content disappears, as shown in fig. 5[308]. In this way the user of the receiving end device is informed that the validity of the content before their eyes decreases gradually over time, while the user of the sending end device explains updated content at other viewing angles.
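The darkening preprocess can be as simple as a per-frame blend toward black (color mixing); the blend factor `alpha` below is an illustrative choice. Repeated application makes stale regions fade over successive updates, while regions the sender is currently facing are redrawn afterwards at full brightness.

```python
import numpy as np

def darken_panorama(panorama, alpha=0.85):
    """One preprocessing pass: blend toward black, out = alpha * old + (1 - alpha) * 0.

    Called once per received frame, a stale pixel keeps alpha**n of its
    brightness after n updates; freshly drawn content overwrites its
    region at full brightness, so only out-of-date content fades away.
    """
    return panorama * alpha
```

After roughly a dozen frames at `alpha = 0.85`, untouched content drops below 15% of its original brightness, matching the gradual disappearance in fig. 5[307] to fig. 5[308].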
3. If the user of the receiving end device is not told to turn left, or does not see or hear the prompt in time, that user does not know the gesture of the user of the sending end device and can only look around to search. The user of the receiving end device therefore cannot immediately assume the same gesture as the user of the sending end device, and some content is missed. Thus, when the inconsistency between the gesture of the user of the receiving end device and that of the user of the sending end device reaches a set threshold, a gesture direction indicator of the sending end is provided, informing the user of the receiving end device to turn in the indicated direction, as shown in fig. 5[309] and fig. 5[310]. After the user of the receiving end device adjusts the gesture according to the direction indicator, if the gesture difference between the two users is smaller than the preset threshold, the direction indicator disappears. Then, for a certain time (for example, 1 second), a screen indicator (such as a colored rectangular frame) marks the current picture of the sending end, identifying that part of the content as the latest content. In this way the receiving end user can easily perceive the current gesture of the sending end user and its real-time changes, as shown in fig. 5[311] and fig. 5[312].
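Choosing which way the direction indicator should point can be derived from the signed yaw difference between the two devices; the sign convention below (negative yaw means a left turn, matching the (0, -90, 0) left turn in the example above) is an assumption.

```python
def turn_hint(receiver_yaw_deg, sender_yaw_deg):
    """Return 'left' or 'right': the shorter turn toward the sender's yaw."""
    # Wrap the difference into [-180, 180); negative means turn left.
    diff = (sender_yaw_deg - receiver_yaw_deg + 180.0) % 360.0 - 180.0
    return 'left' if diff < 0 else 'right'
```

So a receiver facing straight ahead (yaw 0) while the sender has turned to yaw -90 is told to turn left, the shorter of the two possible rotations.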
To make the embodiments of the present application clearer, the specific flow at the sending end in the technical solution of the present application is described in detail below with reference to fig. 6.
Step 601, the sending end device creates a three-dimensional scene.
Step 602, acquiring data such as the gesture and angle of view of the current sending end device.
Step 603, rendering a partial image within the visible range of the current three-dimensional world.
Step 604, transmitting the generated partial image together with data such as the gesture and angle of view.
Step 605, judging whether to exit the application; if yes, the flow ends, otherwise step 602 is executed.
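The sending-end loop of steps 601 to 605 can be sketched as follows; the `engine` and `transport` objects are hypothetical stand-ins for the graphics engine and the data transmission module, not APIs from the patent.

```python
def sender_loop(engine, transport, should_exit):
    """Steps 601-605 of fig. 6 as a loop over hypothetical interfaces."""
    engine.create_scene()                       # step 601: create the 3D scene
    while not should_exit():                    # step 605: exit check
        pose = engine.current_pose()            # step 602: current gesture
        fov = engine.current_fov()              # step 602: current angle of view
        image = engine.render_visible_region()  # step 603: render visible partial image
        transport.send(image, pose, fov)        # step 604: send image + pose + fov
```

Each iteration sends one partial image together with the pose and angle of view under which it was rendered, which is exactly what the receiving end needs to place it on the panorama.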
To make the embodiments of the present application clearer, the specific flow at the receiving end in the technical solution of the present application is described in detail below with reference to fig. 7.
Step 701, creating a panoramic image carrier.
Step 702, acquiring the current gesture of the receiving end device.
Step 703, receiving data such as a local image, a gesture, and a view angle of the transmitting end device.
Step 704, calculating the direction and the size of the received partial image.
Step 705, preprocessing the panoramic image.
Step 706, rendering the received partial image onto a panoramic image carrier.
Step 707, judging whether the deviation between the current receiving end gesture and the sending end gesture reaches a threshold value.
Step 708, a sender direction indicator is displayed.
Step 709, rendering the scene.
Step 710, determining whether to exit the application, if yes, ending, otherwise, executing step 702.
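The receiving-end flow of steps 701 to 710 can be sketched as one loop; the `panorama`, `transport`, and `headset` interfaces are illustrative stand-ins, and the deviation check is simplified to yaw only for brevity.

```python
def receiver_loop(panorama, transport, headset, threshold_deg=30.0):
    """Steps 701-710 of fig. 7; step 701 (creating the panoramic image
    carrier) is assumed done before `panorama` is passed in."""
    while not headset.exit_requested():                    # step 710: exit check
        receiver_pose = headset.pose()                     # step 702: receiver gesture
        image, sender_pose, fov = transport.receive()      # step 703: image + pose + fov
        region = panorama.locate(sender_pose, fov)         # step 704: direction and size
        panorama.preprocess()                              # step 705: fade old content
        panorama.draw(image, region)                       # step 706: render partial image
        # step 707: yaw-only deviation between receiver and sender gestures.
        deviation = abs((sender_pose[1] - receiver_pose[1] + 180.0) % 360.0 - 180.0)
        if deviation >= threshold_deg:
            headset.show_direction_indicator(sender_pose)  # step 708: indicator
        headset.render(panorama)                           # step 709: render the scene
```

Note that, as the description points out, the step-707 check could equally run before step 705 without changing the outcome, since the indicator and the panorama are independent objects within one frame.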
In the embodiments of the present application, the receiving end device preprocesses the old panoramic image before rendering the new partial image, so that the part of the receiving end panoramic image outside the visible range of the sending end gradually darkens and fades, informing the user of the receiving end device that the validity of that content decreases over time and improving the user's immersion. When the gestures of the receiving end device and the sending end device differ by more than a set threshold, a direction indicator for adjusting the gesture of the receiving end device is displayed, helping the user of the receiving end device quickly match the gesture of the sending end device. After the user adjusts the gesture according to the direction indicator, once the gesture deviation falls below the set threshold, a screen indicator marking the latest picture area is displayed for a certain time, highlighting the content that the user of the sending end device wants to express.
Based on the same concept, fig. 8 exemplarily illustrates a virtual reality picture sharing apparatus provided in an embodiment of the present application, which is configured to implement the virtual reality picture sharing method in the foregoing embodiment. The apparatus may be a receiving end device, such as a receiving end virtual reality device.
As shown in fig. 8, the apparatus 800 includes:
a transceiver module 801, configured to receive data of a first partial image from a transmitting end device and a posture and a view angle of the transmitting end device corresponding to the first partial image;
a processing module 802, configured to determine, according to the pose and the angle of view of the transmitting end device, a direction and a size of the first partial image in the panoramic image of the receiving end device;
a display module 803, configured to render the first partial image onto the panoramic image of the receiving end device according to the direction and the size of the first partial image in the panoramic image of the receiving end device; and displaying the panoramic image of the receiving end equipment in the receiving end equipment.
In one possible design, the processing module 802 is further specifically configured to: preprocess the current panoramic image of the receiving end device; the preprocessing comprises: fading, darkening, or blurring the original panoramic image.
In one possible design, the transceiver module 801 is further specifically configured to: acquiring the gesture of the receiving end equipment; the display module 803 is further specifically configured to: and if the deviation between the gesture of the receiving end device and the gesture of the sending end device is greater than or equal to a set threshold value, displaying a direction indication mark, wherein the direction indication mark is used for indicating the receiving end device to adjust the gesture.
In one possible design, the display module 803 is further specifically configured to: and when detecting that the deviation between the gesture of the receiving end device and the gesture of the sending end device is smaller than the set threshold value, clearing the direction indication mark.
In one possible design, the display module 803 is further specifically configured to: and displaying a picture indication mark, wherein the picture indication mark is used for indicating the area where the first local picture is located.
In one possible design, the processing module is further specifically configured to: creating a panoramic image carrier, wherein the panoramic image carrier is used for drawing a panoramic image of the receiving end equipment; the display module 803 is further specifically configured to: drawing the first partial image at a corresponding position of the panoramic image carrier; and updating and displaying the panoramic image.
Based on the same conception, embodiments of the present application provide a computing device including:
a memory for storing program instructions;
and the processor is used for calling the program instructions stored in the memory and executing the virtual reality equipment picture sharing method according to the obtained program instructions.
Based on the same conception, the embodiments of the present application provide a computer-readable storage medium, which includes computer-readable instructions that, when read and executed by a computer, cause the computer to execute the above-described virtual reality device picture sharing method.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present application have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. It is therefore intended that the following claims be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the application.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (9)

1. A virtual reality picture sharing method, wherein the method is applied to a receiving end device, the method comprising:
receiving data of a first partial image from a transmitting end device and a posture and a view angle of the transmitting end device corresponding to the first partial image;
determining the direction and the size of the first partial image in the panoramic image of the receiving end equipment according to the gesture and the view angle of the sending end equipment;
rendering the first partial image onto the panoramic image of the receiving end device according to the direction and the size of the first partial image in the panoramic image of the receiving end device;
preprocessing the current panoramic image of the receiving end equipment; the preprocessing comprises: fading or darkening or blurring the original panoramic image;
and displaying the current panoramic image of the receiving end equipment in the receiving end equipment.
2. The method according to claim 1, wherein the method further comprises:
acquiring the gesture of the receiving end equipment;
and if the deviation between the gesture of the receiving end device and the gesture of the sending end device is greater than or equal to a set threshold value, displaying a direction indication mark, wherein the direction indication mark is used for indicating the receiving end device to adjust the gesture.
3. The method of claim 2, wherein after displaying the direction indicator, the method further comprises:
and when detecting that the deviation between the gesture of the receiving end device and the gesture of the sending end device is smaller than the set threshold value, clearing the direction indication mark.
4. A method according to claim 2 or 3, characterized in that the method further comprises:
and displaying a picture indication mark, wherein the picture indication mark is used for indicating the area where the first local picture is located.
5. The method of claim 4, wherein the picture indication mark is a rectangular frame in a reminder color.
6. The method according to claim 1, wherein the method further comprises:
creating a panoramic image carrier, wherein the panoramic image carrier is used for drawing a panoramic image of the receiving end equipment;
the rendering the first partial image onto the panoramic image of the receiving end device includes:
drawing the first partial image at a corresponding position of the panoramic image carrier;
and updating and displaying the panoramic image.
7. A virtual reality picture sharing apparatus, characterized by comprising:
the receiving and transmitting module is used for receiving data of a first partial image from the transmitting end equipment and the gesture and the view angle of the transmitting end equipment corresponding to the first partial image;
the processing module is used for determining the direction and the size of the first partial image in the panoramic image of the receiving end equipment according to the gesture and the view angle of the sending end equipment;
a display module, configured to render the first partial image onto the panoramic image of the receiving end device according to the direction and the size of the first partial image in the panoramic image of the receiving end device;
the processing module is further used for preprocessing the current panoramic image of the receiving end equipment; the preprocessing comprises: fading or darkening or blurring the original panoramic image;
the display module is further configured to display the current panoramic image of the receiving end device in the receiving end device.
8. A computing device, comprising:
a memory for storing program instructions;
a processor for invoking program instructions stored in said memory and for performing the method according to any of claims 1-6 in accordance with the obtained program instructions.
9. A computer readable storage medium comprising computer readable instructions which, when read and executed by a computer, cause the computer to perform the method of any one of claims 1 to 6.
CN202111107630.5A 2021-09-22 2021-09-22 Virtual reality picture sharing method and device Active CN113873313B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111107630.5A CN113873313B (en) 2021-09-22 2021-09-22 Virtual reality picture sharing method and device


Publications (2)

Publication Number Publication Date
CN113873313A CN113873313A (en) 2021-12-31
CN113873313B true CN113873313B (en) 2024-03-29

Family

ID=78993059

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111107630.5A Active CN113873313B (en) 2021-09-22 2021-09-22 Virtual reality picture sharing method and device

Country Status (1)

Country Link
CN (1) CN113873313B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114972642A (en) * 2022-05-25 2022-08-30 北京有竹居网络技术有限公司 Method, apparatus, device and storage medium for three-dimensional modeling

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3112985A1 (en) * 2015-06-30 2017-01-04 Nokia Technologies Oy An apparatus for video output and associated methods
CN106302427A (en) * 2016-08-09 2017-01-04 深圳市豆娱科技有限公司 Sharing method in reality environment and device
CN106385576A (en) * 2016-09-07 2017-02-08 深圳超多维科技有限公司 Three-dimensional virtual reality live method and device, and electronic device
CN106385587A (en) * 2016-09-14 2017-02-08 三星电子(中国)研发中心 Method, device and system for sharing virtual reality view angle
CN107888987A (en) * 2016-09-29 2018-04-06 华为技术有限公司 A kind of panoramic video player method and device
WO2018120657A1 (en) * 2016-12-27 2018-07-05 华为技术有限公司 Method and device for sharing virtual reality data
CN108632674A (en) * 2017-03-23 2018-10-09 华为技术有限公司 A kind of playback method and client of panoramic video
KR101922968B1 (en) * 2017-07-12 2018-11-28 주식회사 볼트홀 Live streaming method for virtual reality contents and system thereof
CN110494840A (en) * 2017-04-05 2019-11-22 三星电子株式会社 Electronic equipment and screen picture display methods for electronic equipment
CN111163306A (en) * 2018-11-08 2020-05-15 华为技术有限公司 A method and related device for VR video processing
CN111371985A (en) * 2019-09-17 2020-07-03 杭州海康威视系统技术有限公司 Video playing method and device, electronic equipment and storage medium
CN112989214A (en) * 2021-05-20 2021-06-18 湖北游游智游网络科技有限公司 Tourism information display method and related equipment

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10062208B2 (en) * 2015-04-09 2018-08-28 Cinemoi North America, LLC Systems and methods to provide interactive virtual environments
KR102349716B1 (en) * 2015-06-11 2022-01-11 삼성전자주식회사 Method for sharing images and electronic device performing thereof


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
360-Degree Panoramic Shooting: Seeing the World Through a "Fly's Eye"; Texas Instruments; China Electronics Market (Basic Electronics); 2016-09-12; full text *
Developing Embodied Interactive Virtual Characters for Human-Subjects Studies;Kangsoo Kim等;《2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW)》;20200511;全文 *
Design and Implementation of a Multi-user Virtual Reality Server Platform; Xu Xiao; China Master's Theses Full-text Database; 2020-03-15 (No. 3); full text *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant