WO2023138559A1 - Virtual reality interaction method, apparatus, device and storage medium - Google Patents
Virtual reality interaction method, apparatus, device and storage medium
- Publication number
- WO2023138559A1, international application PCT/CN2023/072538 (CN2023072538W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- scene
- terminal
- virtual
- picture
- edge
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Definitions
- the present disclosure relates to the technical field of Internet applications, for example, to a virtual reality interaction method, apparatus, device, and storage medium.
- Embodiments of the present disclosure provide a virtual reality interaction method, apparatus, device, and storage medium, so as to enrich the display effect of virtual reality interaction.
- an embodiment of the present disclosure provides a virtual reality interaction method, including:
- displaying, on the terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model; during the process of the scene edge of the virtual model expanding toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the real scene to a virtual scene; and acquiring an interaction instruction for the virtual scene, and displaying, on the terminal, an interaction effect corresponding to the interaction instruction;
- an embodiment of the present disclosure provides a virtual reality interaction device, including:
- the first display module is configured to display, on the terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model;
- the first control module is configured to, during the process of the scene edge of the virtual model expanding toward the terminal, control the display picture of the terminal to switch from the real scene to a virtual scene when it is detected that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located;
- the second display module is configured to obtain an interactive instruction for the virtual scene, and display an interactive effect corresponding to the interactive instruction on the terminal.
- an embodiment of the present disclosure provides an electronic device, including a memory and a processor, the memory stores a computer program, and the processor implements the virtual reality interaction method provided in the first aspect of the embodiment of the present disclosure when executing the computer program.
- the embodiments of the present disclosure provide a computer-readable storage medium, on which a computer program is stored, and when the computer program is executed by a processor, the virtual reality interaction method provided in the first aspect of the embodiments of the present disclosure is implemented.
- FIG. 1 is a schematic flowchart of a virtual reality interaction method provided by an embodiment of the present disclosure
- FIG. 2 is a schematic diagram of a virtual reality interaction effect provided by an embodiment of the present disclosure
- FIG. 3 is a schematic flowchart of an interaction process in a virtual scene provided by an embodiment of the present disclosure
- FIG. 4 is another schematic flowchart of an interaction process in a virtual scene provided by an embodiment of the present disclosure
- FIG. 5 is a schematic diagram of a screen clipped by a terminal near clipping plane provided by an embodiment of the present disclosure
- FIG. 6 is a schematic structural diagram of a virtual reality interaction device provided by an embodiment of the present disclosure.
- FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
- the term “comprise” and its variations are open-ended, i.e., “including but not limited to”.
- the term “based on” is “based at least in part on”.
- the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
- FIG. 1 is a schematic flowchart of a virtual reality interaction method provided by an embodiment of the present disclosure.
- This embodiment can be applied to an electronic device, and is suitable for a virtual reality interaction scene.
- the electronic device can be at least one of a smart phone, a smart watch, a desktop computer, a laptop computer, a virtual reality terminal, an augmented reality terminal, a wireless terminal, and a laptop portable computer.
- the following description uses a terminal as an example.
- the method may include:
- the real scene picture refers to the picture collected by the terminal camera in real time, and the picture includes a virtual model.
- the virtual model may be a model composed of closed curves, and is used to distinguish real scene pictures from virtual scene pictures.
- the style of the virtual model can be a picture scroll or a closed curve of any shape, such as a square, a circle, or an ellipse.
- a virtual scene picture may be displayed inside the scene edge of the virtual model (that is, inside the closed curve), and a real scene picture may be displayed outside the scene edge of the virtual model (that is, outside the closed curve).
- the edge of the scene of the aforementioned virtual model can continuously expand toward the terminal.
- during this process, the terminal detects in real time whether its viewpoint passes through the plane where the scene edge of the virtual model is located, and performs corresponding control operations based on the detection result.
- during the process of the scene edge of the virtual model expanding toward the terminal, the terminal detects in real time whether its viewpoint passes through the plane where the scene edge of the virtual model is located; if it detects that the viewpoint of the terminal passes through that plane, the current display picture of the terminal is controlled to switch from the real scene to the virtual scene, that is, the current display interface of the terminal displays the picture in the virtual scene.
- here, it is the viewpoint of the terminal that passes through the plane where the scene edge of the virtual model is located; this viewpoint may also be referred to as the virtual camera used for drawing the virtual scene.
- if it is not detected that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, the terminal is kept in the real scene state, that is, the current view of the terminal is still in the real scene, and the current display interface still displays the picture in the real scene.
- as an optional embodiment, the above process of detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located may be as follows: the scene edge of the virtual model extends until it completely covers the display picture; that is, the scene edge of the virtual model keeps expanding toward the terminal, and when it has expanded to completely cover the current display picture, the viewpoint of the terminal can be considered to have passed through the plane where the scene edge of the virtual model is located; otherwise, it is determined that the viewpoint has not passed through that plane.
- as another optional embodiment, detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located may include the following steps:
- Step a: according to the world coordinate positions of the scene edge of the virtual model in the previous frame and the current frame, and the world coordinate position of the terminal in the current frame, determine a first result indicating whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located.
- exemplarily, the position of the terminal's viewpoint (that is, the virtual camera) can be abstracted in advance as a vertex, and the scene edge of the virtual model can be abstracted as a plane; four vertices are set on the scene edge of the virtual model, and the positions of any three of these four vertices, which expand along with the scene edge, are enough to locate the plane.
- the world coordinate position of the scene edge of the virtual model in the previous frame can be understood as the world coordinate position of the scene edge at the time point of the previous frame.
- the world coordinate position of the scene edge of the virtual model in the current frame can be understood as the world coordinate position of the scene edge at the time point of the current frame.
- the position of the virtual camera of the terminal in the current frame can be understood as the world coordinate position of the virtual camera at the time point of the current frame.
- based on this, a first target point is determined from the world coordinate position of the scene edge of the virtual model in the previous frame, that is, the average of the world coordinate positions of the four vertices on the scene edge in the previous frame, and a second target point is determined from the world coordinate position of the scene edge in the current frame, that is, the average of the world coordinate positions of the four vertices on the scene edge in the current frame; a first vector is determined based on the world coordinate position of the terminal's virtual camera in the current frame and the world coordinate position of the first target point, and a second vector is determined based on the world coordinate position of the virtual camera in the current frame and the world coordinate position of the second target point; the normal vector of the plane where the scene edge of the virtual model is located is determined, the first vector is dotted with the normal vector to obtain a first dot-product result, and the second vector is dotted with the normal vector to obtain a second dot-product result; if the two dot-product results have opposite signs (that is, one is positive and the other is negative), it is determined that, during the expansion of the scene edge toward the terminal, there is an intersection point between the terminal's virtual camera and the plane formed by the four vertices on the scene edge of the virtual model.
- when the intersection point exists, it is judged whether the intersection point falls on the same side of the four edges formed by the four vertices; if so, the intersection point falls within the quadrilateral, indicating that the first result is that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located; otherwise, the intersection point is not within the quadrilateral, indicating that the first result is that the viewpoint does not pass through that plane. Certainly, if there is no intersection point between the terminal's virtual camera and the plane formed by the four vertices on the scene edge of the virtual model, the first result is that the viewpoint of the terminal does not pass through the plane where the scene edge of the virtual model is located.
- in an embodiment, the plane normal of the plane formed by these four vertices may also be determined, and the direction of the movement vector of the terminal's virtual camera may be compared with the plane normal to obtain the direction from which the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located (that is, from inside to outside or from outside to inside).
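- as an illustrative, non-limiting sketch, the step-a test above can be written as follows (Python with NumPy; all function and variable names such as `crossed_plane_first_result`, `edge_prev`, `edge_curr`, and `cam` are assumptions of this example, not elements of the disclosure). The sign comparison of the two dot products detects that the moving edge plane has swept past the camera; the point-in-quadrilateral check described above is shown in the step-b sketch further below.

```python
import numpy as np

def crossed_plane_first_result(edge_prev, edge_curr, cam):
    """edge_prev, edge_curr: (4, 3) world coordinates of the four scene-edge
    vertices in the previous/current frame; cam: (3,) world position of the
    virtual camera in the current frame."""
    p1 = edge_prev.mean(axis=0)   # first target point: average of the 4 vertices (previous frame)
    p2 = edge_curr.mean(axis=0)   # second target point: average of the 4 vertices (current frame)
    v1 = p1 - cam                 # first vector: camera -> first target point
    v2 = p2 - cam                 # second vector: camera -> second target point
    # normal vector of the plane, from any three of the four edge vertices
    n = np.cross(edge_curr[1] - edge_curr[0], edge_curr[2] - edge_curr[0])
    d1 = np.dot(v1, n)            # first dot-product result
    d2 = np.dot(v2, n)            # second dot-product result
    return d1 * d2 < 0            # opposite signs: an intersection point exists
```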
- Step b: according to the world coordinate positions of the terminal in the previous frame and the current frame, and the world coordinate position of the scene edge of the virtual model in the current frame, determine a second result indicating whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located.
- based on the world coordinate positions of the terminal's virtual camera in the previous frame and the current frame, the movement vector of the virtual camera is obtained; based on the world coordinate position of the scene edge of the virtual model in the current frame, it is determined whether there is an intersection point between the straight line where the movement vector of the virtual camera lies and the plane formed by the four vertices on the scene edge of the virtual model, and whether the intersection point falls on the same side of the four edges formed by the four vertices. If these two conditions are met at the same time, it can be determined that the second result is that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located; if at least one of the two conditions is not met, it can be determined that the second result is that the viewpoint of the terminal does not pass through that plane.
- the process of determining whether there is an intersection point between the straight line where the movement vector of the virtual camera lies and the plane formed by the four vertices on the scene edge may be: assuming the straight line where the movement vector lies is AB, and the intersection point between the straight line AB and the plane formed by the four vertices is C, judge whether C lies within AB; if it does, an intersection point exists, and if not, there is no intersection point.
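- a companion sketch of the step-b test under the same assumptions: the virtual camera moves from `cam_prev` to `cam_curr` while the current-frame edge plane is treated as static; the segment-plane intersection (whether C lies within AB) and the same-side point-in-quadrilateral check correspond to the two conditions described above.

```python
import numpy as np

def crossed_plane_second_result(cam_prev, cam_curr, edge_curr, eps=1e-9):
    a, b = cam_prev, cam_curr                 # segment AB: the camera's movement vector
    p0 = edge_curr[0]
    n = np.cross(edge_curr[1] - p0, edge_curr[2] - p0)  # plane normal
    denom = np.dot(n, b - a)
    if abs(denom) < eps:                      # movement parallel to the plane: no intersection
        return False
    t = np.dot(n, p0 - a) / denom
    if not 0.0 <= t <= 1.0:                   # intersection C must lie within segment AB
        return False
    c = a + t * (b - a)                       # intersection point C
    # C must fall on the same side of all four edges of the quadrilateral
    signs = [np.dot(np.cross(edge_curr[(i + 1) % 4] - edge_curr[i],
                             c - edge_curr[i]), n)
             for i in range(4)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
```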
- it should be noted that the terminal can determine, based on a Simultaneous Localization and Mapping (SLAM) algorithm, the world coordinate positions of the scene edge of the virtual model in the previous frame and the current frame, the world coordinate position of the terminal in the current frame, the world coordinate positions of the terminal in the previous frame and the current frame, and the world coordinate position of the scene edge of the virtual model in the current frame.
- Step c: according to at least one of the first result and the second result, determine a target result indicating whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located.
- after the first result and the second result are obtained, they may be considered together to determine the target result of whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located. That is to say, the target result may be determined according to the first result alone, according to the second result alone, or based on both the first result and the second result.
- optionally, set the label for not passing through to 0, for passing through in the forward direction to -1, and for passing through in the reverse direction to 1; then, if the first result is not 0, the first result can be determined as the target result of whether the terminal's viewpoint passes through the plane where the scene edge of the virtual model is located, and if the first result is 0, the second result can be determined as the target result.
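- a minimal sketch of this step-c combination using the labels from the text (0 = not passed, -1 = forward pass, 1 = reverse pass); the function name is illustrative:

```python
def target_result(first_result: int, second_result: int) -> int:
    # the first result takes precedence; fall back to the second when it is 0
    return first_result if first_result != 0 else second_result
```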
- after the view of the terminal switches to the virtual scene, the virtual scene supports interactive functions, and the user can trigger a corresponding trigger operation.
- after acquiring the user's interaction instruction for the virtual scene, the terminal can display the corresponding interaction effect based on the instruction, so that the user perceives the changes of the virtual scene, which enhances the interactivity with the virtual information. Exemplarily, as shown in FIG. 2:
- the current interface of the terminal displays a real scene picture
- the real scene picture includes a virtual model
- the scene edge of the virtual model expands toward the terminal direction, that is, the scene edge of the virtual model spreads toward the terminal edge direction
- a partial virtual scene picture is displayed within the scene edge of the virtual model
- the partial virtual scene picture is obtained by performing screen uv sampling on the panoramic image under the virtual scene based on the scene edge of the virtual model.
- as the scene edge of the virtual model keeps expanding toward the terminal, the viewpoint of the terminal keeps advancing into the virtual scene; when it is detected that the viewpoint passes through the plane where the scene edge is located, the display picture of the terminal is switched from the real scene to the virtual scene, that is, the current display interface of the terminal displays the virtual scene picture.
- the terminal can detect the user's interactive command, and display the corresponding interactive effect based on the interactive command.
- in an embodiment, when it is detected again that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, the display picture of the terminal is controlled to switch from the virtual scene back to the real scene.
- exemplarily, the terminal can move a preset distance in the direction away from the virtual scene; during this process, it is detected in real time whether the terminal's viewpoint passes through the plane where the scene edge of the virtual model is located, and if it is detected again that the viewpoint passes through that plane, the display picture of the terminal can be switched from the virtual scene to the real scene.
- the virtual reality interaction method displays, on the terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model whose scene edge expands toward the terminal; if it is detected that the terminal's viewpoint passes through the plane where the scene edge of the virtual model is located, the display picture of the terminal is controlled to switch from the real scene to the virtual scene, an interaction instruction for the virtual scene is acquired, and the terminal displays the interaction effect corresponding to the instruction. This realizes a mode in which a virtual model is added to the real scene picture, the current display picture of the terminal is switched from the real scene to the virtual scene when the terminal's viewpoint is detected to pass through the plane where the scene edge of the virtual model is located, and a service that can interact with the user is provided in the virtual scene.
- the above solution enriches the display effect of virtual reality interaction, enhances the interactivity in the process of virtual reality interaction, and satisfies users' personalized display needs for virtual information.
- the foregoing S103 may include:
- the rotation motion data may include the rotation direction and the rotation angle of the terminal.
- the terminal is provided with a corresponding sensor, such as a gyroscope, through which the rotational movement data of the terminal can be detected.
- the virtual scene picture corresponding to the rotational movement data in the virtual scene can be rendered based on the obtained rotational movement data, and the virtual scene picture is displayed on the terminal. In this way, with the rotation of the terminal, the user can browse the virtual scene in 360 degrees.
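- a loose sketch of this browsing loop, assuming hypothetical `read_gyroscope`, `render_panorama`, and `display` helpers (and a `camera` object with a `rotate` method) standing in for whatever sensor and rendering APIs the terminal actually provides; the angular velocity is integrated over each frame interval:

```python
import time

def browse_virtual_scene(camera, stop_event):
    """stop_event: e.g. a threading.Event set when the user leaves the scene."""
    last = time.monotonic()
    while not stop_event.is_set():
        now = time.monotonic()
        wx, wy, wz = read_gyroscope()             # angular velocity in rad/s (assumed helper)
        dt = now - last
        camera.rotate(wx * dt, wy * dt, wz * dt)  # integrate rotation over the frame interval
        display(render_panorama(camera))          # redraw the 360-degree virtual scene
        last = now
```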
- the virtual scene picture may include dynamic objects and static objects.
- the static object refers to the object whose state does not change with the picture, such as green hills, houses and clouds, etc.
- the dynamic object refers to the object whose state can change with the picture, such as carp, fireworks and other objects.
- the objects in the above-mentioned virtual scene picture may be three-dimensional objects; based on this, the above S302 may include:
- the first target object corresponding to the rotation motion data in the virtual scene may be determined based on the acquired rotation motion data.
- the depth information refers to the distance between the plane where the terminal camera is located and the surface of the first target object.
- the terminal may render the first target object based on the depth value of the first target object, and display the rendering result, so that the displayed rendering result can reflect a three-dimensional sense of space.
- the number of the first target object may be one or more.
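- a loose sketch of the depth idea, assuming depth is measured from the camera along its view direction and that objects are drawn from far to near (painter's order) so that nearer objects correctly cover farther ones; `camera.position`, `camera.forward`, `obj.position`, and `draw` are illustrative assumptions:

```python
import numpy as np

def render_with_depth(camera, objects):
    def depth(obj):
        # distance from the camera plane to the object surface along the view direction
        return np.dot(obj.position - camera.position, camera.forward)
    for obj in sorted(objects, key=depth, reverse=True):  # far-to-near painter's order
        draw(obj)
```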
- the above-mentioned virtual scene picture may also include interactive guidance information, such as a dynamic guiding hand.
- through the interactive guidance information, the user can clearly know which objects in the virtual scene can be interacted with and which cannot.
- the interactive object in the virtual scene may be a lantern, and a dynamic guiding hand is provided at a corresponding position of the lantern to indicate that the lantern supports interactive functions. In this way, the user can click the screen position where the lantern is located to light the lantern and realize the interaction with the virtual scene.
- the above-mentioned process of S103 may also be: in response to a trigger operation on the first one of the second target objects in the virtual scene, displaying a first interaction effect on the terminal; and in response to a trigger operation on the Nth one of the second target objects in the virtual scene, synchronously displaying the first interaction effect and a second interaction effect on the terminal.
- the first interaction effect is different from the second interaction effect, and N is a natural number greater than 1.
- the second target object supports the user's interactive operation, the user can perform a trigger operation on the second target object, and the terminal displays the corresponding first interactive effect after acquiring the trigger operation on the second target object in the virtual scene.
- the first interactive effect can be realized through corresponding mapping technology or animation technology.
- the lantern is dark before the interaction, and the virtual scene picture is also dark.
- the user performs a trigger operation on the lantern by following the dynamic guiding hand in the virtual scene picture.
- the terminal controls the lantern to change from dark to bright and a blessing message appears, and the virtual scene picture also changes from dark to bright.
- the effect of changing the lantern from dark to bright and the image of the virtual scene from dark to bright can be realized through mapping technology, and the corresponding animation can be played at the same time to realize the effect of hanging blessing couplets from the lantern.
- the virtual scene picture may include multiple second target objects, and the user may sequentially perform trigger operations on the multiple second target objects to display corresponding first interactive effects.
- the terminal may display the first interactive effect and the second interactive effect synchronously.
- the first The interactive effect is different from the second interactive effect, and the second interactive effect can also be realized through corresponding texture technology or animation technology.
- the user can light each lantern separately.
- when the user clicks the fourth lantern, the terminal controls the fourth lantern to change from dark to bright, a blessing couplet hangs from the lantern, and a rain effect of the character "Fu" can also appear simultaneously.
- an animation with the rain effect of the word "Fu” can be pre-made, and after the trigger operation on the fourth lantern is detected, the animation with the rain effect of the word "Fu” will be played.
- the simultaneous appearance of the first interactive effect and the second interactive effect when the fourth lantern is clicked is just an example, and corresponding settings can be made based on requirements.
- for a three-dimensional object, whether the touch position interacts with the object can be determined as follows: obtain the screen touch position, a first position of the terminal in the three-dimensional space, and a second position of the second target object in the three-dimensional space; convert the screen touch position into the three-dimensional space to obtain a third position, and determine the ray corresponding to the touch point based on the third position (here, the touch point can be converted into a corresponding ray based on two preset depth values and the third position); normalize the ray to obtain its unit vector; then determine the distance from the terminal to the second target object based on the first position and the second position, and multiply the unit vector by the distance to obtain a target vector; obtain the coordinates of the arrival point of the target vector through the camera coordinates of the terminal, and determine, based on the coordinates of the arrival point and the coordinates of the center of the second target object, whether the arrival point is in the space where the second target object is located; if so, it is determined that the touch position interacts with the three-dimensional object.
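- a sketch of this hit test under stated assumptions: `unproject` stands in for the engine's screen-to-world conversion, the two preset depth values and the bounding radius are illustrative, and the object's "space" is approximated as a sphere around its center:

```python
import numpy as np

def touch_hits_object(screen_pos, cam_pos, obj_center, obj_radius):
    near_pt = unproject(screen_pos, depth=0.1)    # third position at the first preset depth
    far_pt = unproject(screen_pos, depth=10.0)    # same touch point at the second preset depth
    ray = far_pt - near_pt                        # ray corresponding to the touch point
    unit = ray / np.linalg.norm(ray)              # normalized: unit vector of the ray
    dist = np.linalg.norm(obj_center - cam_pos)   # distance from terminal to the object
    arrival = cam_pos + unit * dist               # arrival point of the target vector
    # interact only if the arrival point falls within the object's space
    return np.linalg.norm(arrival - obj_center) <= obj_radius
```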
- in this way, the virtual scene picture corresponding to the rotational motion data can be displayed, so as to realize 360-degree browsing of the virtual scene, perceive the changes of the virtual scene, enrich the interaction modes, and meet the user's personalized display requirements for the virtual scene. Moreover, the user can also interact with the objects in the virtual scene, displaying a more realistic interactive effect, enriching the display effect of the virtual scene, enhancing the fun of virtual reality interaction, and improving the user experience.
- the above process of S103 may also be:
- the current virtual scene picture is the blending picture, and the current interactive picture is the picture to be blended.
- the current interactive picture refers to the current picture in an interactive sequence frame; for example, the interactive sequence frame is a sequence frame of the "Fu" character rain.
- here, the current virtual scene picture is used as the blending picture and the current interactive picture is used as the picture to be blended; that is, the two are set in an inverse relationship, and the current virtual scene picture is used to perform a blending operation on the current interactive picture.
- for the blending operation process, reference may be made to blending algorithms in the related art.
- setting the current virtual scene picture and the current interactive picture in an inverse relationship can make the finally displayed target virtual scene picture more realistic and enrich the display effect of the virtual picture. For example, after all the lanterns are lit, the "Fu" character rain that appears in the virtual scene can be better integrated into the virtual scene picture, making the "Fu" characters appear translucent.
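- a minimal blend sketch, assuming straight alpha compositing over 8-bit image arrays; the disclosure only states that the two pictures are blended in an inverse relationship, so the exact operator here is an assumption of this example:

```python
import numpy as np

def blend(virtual_rgb, interactive_rgba):
    """virtual_rgb: (H, W, 3) uint8 virtual scene frame; interactive_rgba:
    (H, W, 4) uint8 frame of the interactive sequence (e.g. the "Fu" rain)."""
    alpha = interactive_rgba[..., 3:4] / 255.0    # per-pixel opacity of the interactive frame
    out = interactive_rgba[..., :3] * alpha + virtual_rgb * (1.0 - alpha)
    return out.astype(np.uint8)                   # target virtual scene picture
```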
- the camera will clip objects beyond the viewing frustum.
- the viewing frustum is the region of space that a perspective camera can see and render; any object closer to the camera than the near clipping plane will not be rendered. Then, when the viewpoint of the terminal passes through the plane where the virtual model is located, as shown in FIG. 5, that is, when the terminal switches from the real scene to the virtual scene, part of the virtual scene picture may not be rendered due to clipping at the camera's near clipping plane; to this end, the process can be handled with reference to the following embodiments.
- the method may further include: when the distance between the terminal's viewpoint and the plane where the scene edge of the virtual model is located meets a preset threshold, perform a completion operation on the current display screen to fill the screen clipped by the near clipping plane of the terminal.
- the above preset threshold may be determined based on the distance between the plane where the viewpoint of the terminal is located and the near clipping plane of the camera's viewing frustum; in this way, when the distance between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located is less than or equal to the preset threshold, clipping by the camera's near clipping plane may occur.
- in this case, the terminal can perform a completion operation on the current display picture, so as to fill in the picture clipped by the near clipping plane of the terminal. In this way, when the terminal switches from the real scene to the virtual scene, the virtual scene picture is displayed on the terminal screen, avoiding the problem that part of the virtual scene picture is not rendered; likewise, when the terminal switches from the virtual scene to the real scene, the real scene picture is displayed on the terminal screen, avoiding the problem that part of the real scene picture is not rendered.
- the above-mentioned process of completing the current display image may include: determining a target filling image according to the relative positional relationship between the viewing point of the terminal and the plane where the edge of the scene of the virtual model is located; using the target filling image to perform a complementary operation on the current display image.
- the above relative positional relationship can reflect the direction in which the viewing point of the terminal passes through the plane where the edge of the scene of the virtual model is located.
- exemplarily, when the terminal switches from the real scene to the virtual scene, screen uv sampling is performed on the virtual scene picture using the quadrilateral formed by the four vertices that move with the scene edge of the virtual model, and the sampling result is determined as the target filling picture; then, the target filling picture is used to complete the current display picture.
- the target filling picture can be used as a layer and set at the bottom of the current display picture, so as to realize the completion of the virtual scene picture that has been cropped due to the near cropping plane of the terminal camera.
- correspondingly, when the terminal switches from the virtual scene to the real scene, screen uv sampling is performed on the real scene picture using the quadrilateral formed by these four vertices, and the sampling result is determined as the target filling picture; then, the target filling picture is used to complete the current display picture.
- the target filling picture can be used as a layer and set at the bottom of the current display picture, so as to realize the completion of the real scene picture that is cropped due to the near cropping plane of the terminal camera.
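- a sketch of this completion flow under stated assumptions; `distance_to_plane`, `project_to_screen_uv`, `sample_panorama`, and `composite` are hypothetical helpers, and the crossing direction recorded earlier selects whether the virtual panorama or the real scene is sampled as the target filling picture:

```python
def complete_frame(cam, edge_vertices, threshold, going_inward, frame):
    if distance_to_plane(cam, edge_vertices) > threshold:
        return frame                               # far from the edge plane: no clipping risk
    # choose the fill source from the direction of the crossing
    source = "virtual_panorama" if going_inward else "real_scene"
    uv_quad = [project_to_screen_uv(v) for v in edge_vertices]  # quad in screen uv space
    fill = sample_panorama(source, uv_quad)        # screen uv sampling -> target filling picture
    return composite(bottom=fill, top=frame)       # the fill is layered under the current picture
```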
- in this way, the completion operation can be performed on the picture clipped by the near clipping plane of the terminal camera, so that the displayed picture meets expectations; that is, when the terminal switches from the real scene to the virtual scene, the virtual scene picture is displayed on the terminal screen, avoiding the problem of displaying part of the real scene picture, and vice versa, thereby improving the display effect of virtual reality interaction.
- Fig. 6 is a schematic structural diagram of a virtual reality interaction device provided by an embodiment of the present disclosure. As shown in FIG. 6 , the device may include: a first display module 601 , a first control module 602 and a second display module 603 .
- the first display module 601 is configured to display, on the terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model;
- the first control module 602 is configured to control the display screen of the terminal to switch from a real scene to a virtual scene when it is detected that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located during the process of the scene edge of the virtual model expanding toward the terminal;
- the second display module 603 is configured to obtain an interaction instruction for the virtual scene, and display an interaction effect corresponding to the interaction instruction on the terminal.
- the virtual reality interaction device displays, on the terminal, a real scene picture captured in real time.
- the real scene picture includes a virtual model whose scene edge expands toward the terminal; if it is detected that the terminal's viewpoint passes through the plane where the scene edge of the virtual model is located, the display picture of the terminal is controlled to switch from the real scene to the virtual scene, an interaction instruction for the virtual scene is acquired, and the terminal displays the interaction effect corresponding to the instruction, thereby realizing a mode in which a virtual model is added to the real scene picture, the current display picture of the terminal is switched from the real scene to the virtual scene when the terminal's viewpoint is detected to pass through the plane where the scene edge of the virtual model is located, and a service that can interact with the user is provided in the virtual scene.
- the above scheme enriches the display effect of virtual reality interaction, enhances the interactivity in the process of virtual reality interaction, and satisfies users' personalized display needs for virtual information.
- the second display module 603 may include: a detection unit and a first display unit.
- the detection unit is configured to detect the rotational movement data of the terminal
- the first display unit is configured to display, on the terminal, a virtual scene picture corresponding to the rotation motion data under the virtual scene.
- the virtual scene picture includes interactive guidance information.
- the first display unit is configured to determine and display the virtual scene picture corresponding to the rotation motion data in the following manner: determine a first target object in the virtual scene corresponding to the rotation motion data; acquire depth information of the first target object; render the first target object according to the depth information, and display the rendering result on the terminal.
- the second display module 603 may further include: a second display unit and a third display unit.
- the second display unit is configured to display the first interactive effect on the terminal in response to a trigger operation on the first second target object in the virtual scene;
- the third display unit is configured to synchronously display the first interactive effect and the second interactive effect in response to a trigger operation for the Nth second target object in the virtual scene; wherein, the first interactive effect is different from the second interactive effect, and N is a natural number greater than 1.
- the second display module 603 may further include: a first acquisition unit, a second acquisition unit, a processing unit, and a fourth display unit.
- the first acquiring unit is configured to acquire interactive instructions for the virtual scene
- the second acquiring unit is configured to acquire the current virtual scene picture and the current interactive picture corresponding to the interaction instruction; wherein the current virtual scene picture is the blending picture, and the current interactive picture is the picture to be blended;
- the processing unit is configured to use the current virtual scene picture to perform a blending operation on the current interactive picture to obtain a target virtual scene picture;
- the fourth display unit is configured to display the target virtual scene picture.
- the device may further include: a second control module.
- the second control module is configured to control the display screen of the terminal to switch from the virtual scene to the real scene when it is detected again that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located.
- the device may further include: a screen completion module.
- the screen completion module is configured to, when the distance between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located meets a preset threshold, perform a completion operation on the current display picture, so as to fill in the picture clipped by the near clipping plane of the terminal.
- the screen completion module is configured to complete the current display screen in the following manner: determine the target filling screen according to the relative positional relationship between the viewing point of the terminal and the plane where the edge of the scene of the virtual model is located; use the target filling screen to complete the current display screen.
- the device may further include a detection module.
- the detection module is configured to detect that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located;
- the detection module is configured to detect that the terminal's viewpoint passes through the plane where the scene edge of the virtual model is located in the following manner: according to the world coordinate positions of the scene edge of the virtual model in the previous frame and the current frame, and the world coordinate position of the terminal in the current frame, determine a first result indicating whether the terminal's viewpoint passes through the plane where the scene edge of the virtual model is located; according to the world coordinate positions of the terminal in the previous frame and the current frame, and the world coordinate position of the scene edge of the virtual model in the current frame, determine a second result indicating whether the terminal's viewpoint passes through the plane where the scene edge of the virtual model is located; and according to at least one of the first result and the second result, determine a target result indicating whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located.
- the detection module is further configured to detect that the viewpoint of the terminal passes through the plane where the edge of the scene of the virtual model is located in the following manner: the edge of the scene of the virtual model extends to completely cover the display screen.
- referring to FIG. 7, it shows a schematic structural diagram of an electronic device 700 suitable for implementing the embodiments of the present disclosure.
- the electronic equipment in the embodiments of the present disclosure may include mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (Portable Android Device, PAD), portable multimedia players (Portable Media Player, PMP), vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and fixed terminals such as digital televisions (that is, digital TVs) and desktop computers.
- the electronic device shown in FIG. 7 is just an example.
- an electronic device 700 may include a processing device (such as a central processing unit, a graphics processing unit, etc.) 701, and the processing device may perform various appropriate actions and processes according to a program stored in a read-only memory (Read Only Memory, ROM) 702 or a program loaded from a storage device 708 into a random access memory (Random Access Memory, RAM) 703.
- in the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored.
- the processing device 701, ROM 702, and RAM 703 are connected to each other through a bus 704.
- An input/output (Input/Output, I/O) interface 705 is also connected to the bus 704 .
- the following devices can be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, etc.; an output device 707 including, for example, a liquid crystal display (Liquid Crystal Display, LCD), a speaker, a vibrator, etc.; a storage device 708 including, for example, a magnetic tape, a hard disk, etc.; and a communication device 709.
- the communication means 709 may allow the electronic device 700 to communicate with other devices wirelessly or by wire to exchange data. While FIG. 7 shows electronic device 700 having various means, it should be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
- embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
- the computer program may be downloaded and installed from a network via communication means 709 , or from storage means 708 , or from ROM 702 .
- when the computer program is executed by the processing device 701, the above-mentioned functions defined in the methods of the embodiments of the present disclosure are performed.
- the above-mentioned computer-readable medium in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or a combination of the above two.
- the computer-readable storage medium may be, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
- Examples of computer-readable storage media may include: an electrical connection having at least one lead, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- a computer-readable storage medium may be a tangible medium containing or storing a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
- a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including electromagnetic signals, optical signals, or any suitable combination of the foregoing.
- a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
- the program code contained on the computer-readable medium can be transmitted by an appropriate medium, including: electric wire, optical cable, radio frequency (Radio Frequency, RF), etc., or a suitable combination of the above.
- the client and the server can communicate using currently known or future-developed network protocols such as HyperText Transfer Protocol (HyperText Transfer Protocol, HTTP), and can be interconnected with any form or medium of digital data communication (for example, a communication network).
- Examples of communication networks include local area networks (Local Area Network, LAN), wide area networks (Wide Area Network, WAN), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as currently known or future developed networks.
- the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
- the above-mentioned computer-readable medium carries at least one program; when the at least one program is executed by the electronic device, the electronic device is caused to: display, on the terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model; during the process of the scene edge of the virtual model expanding toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, control the display picture of the terminal to switch from the real scene to the virtual scene; and acquire an interaction instruction for the virtual scene, and display, on the terminal, an interaction effect corresponding to the interaction instruction.
- Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
- the remote computer can be connected to the user computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or, alternatively, can be connected to an external computer (e.g. via the Internet using an Internet Service Provider).
- each block in the flowchart or block diagram may represent a module, program segment, or a portion of code that includes at least one executable instruction for implementing a specified logical function.
- the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
- the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. Wherein, the name of the unit does not constitute a limitation of the unit itself under certain circumstances, for example, the first obtaining unit may also be described as "a unit for obtaining at least two Internet Protocol addresses".
- exemplary types of hardware logic components include: Field-Programmable Gate Arrays (FPGA), Application Specific Integrated Circuits (ASIC), Application Specific Standard Parts (ASSP), Systems on Chip (SOC), Complex Programmable Logic Devices (CPLD), etc.
- a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
- a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
- a machine-readable medium may comprise an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a suitable combination of the foregoing.
- machine-readable storage media may include an electrical connection based on at least one wire, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or a suitable combination of the foregoing.
- an electronic device including a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
- displaying, on the terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model; during the process of the scene edge of the virtual model expanding toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the real scene to a virtual scene; and acquiring an interaction instruction for the virtual scene, and displaying, on the terminal, an interaction effect corresponding to the interaction instruction.
- a computer-readable storage medium on which a computer program is stored, and when the computer program is executed by a processor, the following steps are implemented:
- displaying, on the terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model; during the process of the scene edge of the virtual model expanding toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the real scene to a virtual scene; and acquiring an interaction instruction for the virtual scene, and displaying, on the terminal, an interaction effect corresponding to the interaction instruction.
- the virtual reality interaction device, device, and storage medium provided in the above embodiments can execute the virtual reality interaction method provided in any embodiment of the present disclosure, and have corresponding functional modules and beneficial effects for executing the method.
- for technical details not exhaustively described in this embodiment, reference may be made to the virtual reality interaction method provided in any embodiment of the present disclosure.
- a virtual reality interaction method including:
- displaying, on the terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model; during the process of the scene edge of the virtual model expanding toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the real scene to a virtual scene; and acquiring an interaction instruction for the virtual scene, and displaying, on the terminal, an interaction effect corresponding to the interaction instruction.
- the above virtual reality interaction method, further comprising: detecting the rotational motion data of the terminal; and displaying, on the terminal, a virtual scene picture corresponding to the rotational motion data in the virtual scene.
- the virtual scene picture includes interactive guidance information.
- the above virtual reality interaction method further comprising: determining a first target object in a virtual scene corresponding to the rotational motion data; acquiring depth information of the first target object; rendering the first target object according to the depth information, and displaying a rendering result on the terminal.
- the above virtual reality interaction method, further comprising: displaying a first interaction effect on the terminal in response to a trigger operation on the first one of the second target objects in the virtual scene; and in response to a trigger operation on the Nth one of the second target objects in the virtual scene, synchronously displaying the first interaction effect and a second interaction effect on the terminal; wherein the first interaction effect is different from the second interaction effect, and N is a natural number greater than 1.
- the above virtual reality interaction method, further comprising: acquiring an interaction instruction for the virtual scene; acquiring a current virtual scene picture and a current interactive picture corresponding to the interaction instruction, wherein the current virtual scene picture is the blending picture and the current interactive picture is the picture to be blended; using the current virtual scene picture to perform a blending operation on the current interactive picture to obtain a target virtual scene picture; and displaying the target virtual scene picture on the terminal.
- the above virtual reality interaction method is provided, further comprising: in response to detecting again that the viewpoint of the terminal passes through the plane where the edge of the scene of the virtual model is located, controlling the display screen of the terminal to switch from the virtual scene to the real scene.
- the above virtual reality interaction method further comprising: when the distance between the viewpoint of the terminal and the plane where the edge of the scene of the virtual model is located satisfies a preset threshold, performing a completion operation on the current display screen to fill the screen clipped by the near clipping plane of the terminal.
- the above virtual reality interaction method further comprising: determining a target filling picture according to the relative positional relationship between the viewing point of the terminal and the plane where the edge of the scene of the virtual model is located; using the target filling picture to perform a complement operation on the current display picture.
- the above virtual reality interaction method, further comprising: according to the world coordinate positions of the scene edge of the virtual model in the previous frame and the current frame, and the world coordinate position of the terminal in the current frame, determining a first result indicating whether the terminal's viewpoint passes through the plane where the scene edge of the virtual model is located; according to the world coordinate positions of the terminal in the previous frame and the current frame, and the world coordinate position of the scene edge of the virtual model in the current frame, determining a second result indicating whether the terminal's viewpoint passes through the plane where the scene edge of the virtual model is located; and according to at least one of the first result and the second result, determining a target result indicating whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located.
- the above virtual reality interaction method is provided, further comprising: extending the edge of the scene of the virtual model to completely cover the display screen.
Abstract
Embodiments of the present disclosure disclose a virtual reality interaction method, apparatus, device, and storage medium. The method includes: displaying, on a terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model; during the process of the scene edge of the virtual model expanding toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the real scene to a virtual scene; and acquiring an interaction instruction for the virtual scene, and displaying, on the terminal, an interaction effect corresponding to the interaction instruction.
Description
The present disclosure claims priority to Chinese Patent Application No. 202210074599.8, filed with the Chinese Patent Office on January 21, 2022, the entire content of which is incorporated herein by reference.
The present disclosure relates to the technical field of Internet applications, for example, to a virtual reality interaction method, apparatus, device, and storage medium.
With the continuous development of Internet technology, various interesting special-effect applications have appeared on the network, and users can select a corresponding special-effect application to shoot videos. However, the special-effect applications in the related art take relatively limited forms and offer poor interactivity, and cannot meet users' personalized interaction needs.
Summary
Embodiments of the present disclosure provide a virtual reality interaction method, apparatus, device, and storage medium, so as to enrich the display effect of virtual reality interaction.
In a first aspect, an embodiment of the present disclosure provides a virtual reality interaction method, including:
displaying, on a terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model;
during the process of the scene edge of the virtual model expanding toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the real scene to a virtual scene; and
acquiring an interaction instruction for the virtual scene, and displaying, on the terminal, an interaction effect corresponding to the interaction instruction.
In a second aspect, an embodiment of the present disclosure provides a virtual reality interaction apparatus, including:
a first display module, configured to display, on a terminal, a real scene picture captured in real time, where the real scene picture includes a virtual model;
a first control module, configured to, during the process of the scene edge of the virtual model expanding toward the terminal, control the display picture of the terminal to switch from the real scene to a virtual scene when it is detected that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located; and
a second display module, configured to acquire an interaction instruction for the virtual scene and display, on the terminal, an interaction effect corresponding to the interaction instruction.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor, when executing the computer program, implements the virtual reality interaction method provided in the first aspect of the embodiments of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure provides a computer-readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the virtual reality interaction method provided in the first aspect of the embodiments of the present disclosure.
Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of a virtual reality interaction method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram of a virtual reality interaction effect provided by an embodiment of the present disclosure;
FIG. 3 is a schematic flowchart of an interaction process in a virtual scene provided by an embodiment of the present disclosure;
FIG. 4 is another schematic flowchart of an interaction process in a virtual scene provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a picture clipped by the near clipping plane of a terminal provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a virtual reality interaction apparatus provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Embodiments of the present disclosure will be described below with reference to the drawings. Although some embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as being limited to the embodiments set forth here. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only.
It should be understood that the steps described in the method embodiments of the present disclosure may be executed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the execution of some of the illustrated steps.
The term "comprise" and its variations as used herein are open-ended, that is, "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of, or the interdependence between, the functions performed by these apparatuses, modules, or units.
It should be noted that the modifiers "a/an" and "a plurality of" mentioned in the present disclosure are illustrative; those skilled in the art should understand that, unless otherwise clearly indicated in the context, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
FIG. 1 is a schematic flowchart of a virtual reality interaction method provided by an embodiment of the present disclosure. This embodiment may be applied to an electronic device and is suitable for virtual reality interaction scenarios. The electronic device may be at least one of a smartphone, a smart watch, a desktop computer, a laptop computer, a virtual reality terminal, an augmented reality terminal, a wireless terminal, a portable computer, and similar devices; the following description takes a terminal as an example. As shown in FIG. 1, the method may include:
S101: Display, on a terminal, a real scene picture collected in real time.
The real scene picture refers to a picture collected in real time by the camera of the terminal, and the picture includes a virtual model. The virtual model may be a model formed by a closed curve and is used to distinguish the real scene picture from the virtual scene picture. For example, the virtual model may take the form of a scroll painting or a closed curve of any shape, such as a square, a circle, or an ellipse. Optionally, a partial virtual scene picture may be displayed inside the scene edge of the virtual model (i.e., inside the closed curve), and the real scene picture may be displayed outside the scene edge of the virtual model (i.e., outside the closed curve).
The scene edge of the virtual model may continuously expand toward the terminal. During this process, the terminal detects in real time whether its viewpoint passes through the plane where the scene edge of the virtual model is located, and performs a corresponding control operation based on the detection result.
S102: During the process in which the scene edge of the virtual model expands toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, control the display picture of the terminal to switch from the real scene to the virtual scene.
During the process in which the scene edge of the virtual model expands toward the terminal, the terminal detects in real time whether it passes through the plane where the scene edge of the virtual model is located. If it is detected that the viewpoint of the terminal passes through that plane, the current display picture of the terminal is controlled to switch from the real scene to the virtual scene, i.e., the current display interface of the terminal shows the picture of the virtual scene. The viewpoint of the terminal here may also be referred to as the virtual camera used for rendering the virtual scene. If it is not detected that the viewpoint of the terminal passes through that plane, the terminal remains in the real scene state, i.e., the current view of the terminal is still in the real scene, and the current display interface still shows the picture of the real scene.
As an optional implementation, the process of detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located may be: the scene edge of the virtual model expands until it completely covers the display picture. That is, the scene edge of the virtual model continuously expands toward the terminal, and once it has expanded to completely cover the current display picture, the viewpoint of the terminal can be considered to have passed through the plane where the scene edge of the virtual model is located; otherwise, it is determined that the terminal has not passed through that plane.
As another optional implementation, the process of detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located may include the following steps:
Step a: According to the world coordinate positions of the scene edge of the virtual model in the previous frame and the current frame, and the world coordinate position of the terminal in the current frame, determine a first result of whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located.
Illustratively, the position of the terminal's viewpoint (i.e., the virtual camera) may be abstracted in advance as a vertex, and the scene edge of the virtual model may be abstracted as a plane. Four vertices are set on the scene edge of the virtual model, and the positions of any three of these four vertices, which follow the expansion of the scene edge, are sufficient to locate this plane. The world coordinate position of the scene edge in the previous frame can be understood as its world coordinate position at the time point of the previous frame; likewise, the world coordinate position of the scene edge in the current frame can be understood as its world coordinate position at the time point of the current frame, and the position of the terminal's virtual camera in the current frame can be understood as the virtual camera's world coordinate position at the time point of the current frame.
On this basis, a first target point is determined from the world coordinate position of the scene edge in the previous frame, i.e., the average of the world coordinate positions of the four vertices on the scene edge in the previous frame, and a second target point is determined from the world coordinate position of the scene edge in the current frame, i.e., the average of the world coordinate positions of the four vertices in the current frame. Then, a first vector is determined from the world coordinate position of the terminal's virtual camera in the current frame and the world coordinate position of the first target point, and a second vector is determined from the world coordinate position of the virtual camera in the current frame and the world coordinate position of the second target point. The normal vector of the plane where the scene edge is located is determined; the dot product of the first vector with the normal vector yields a first dot-product result, and the dot product of the second vector with the normal vector yields a second dot-product result. If the first and second dot-product results have opposite signs (i.e., the first is positive and the second negative, or the first is negative and the second positive), it is determined that, during the expansion of the scene edge toward the terminal, an intersection exists between the virtual camera and the plane formed by the four vertices on the scene edge. Further, when an intersection exists, it is judged whether the intersection point lies on the same side of each of the four edges connecting the vertices: if so, the intersection point falls within the quadrilateral, indicating that the first result is that the viewpoint of the terminal passes through the plane where the scene edge is located; otherwise, the intersection point is outside the quadrilateral, indicating that the first result is that the viewpoint does not pass through the plane. Of course, if no intersection exists between the virtual camera and the plane formed by the four vertices, the first result is likewise that the viewpoint of the terminal does not pass through the plane.
In one embodiment, the normal of the plane formed by the four vertices may also be determined, and the direction of the movement vector of the terminal's virtual camera may be compared with this plane normal, so as to obtain the direction from which the viewpoint of the terminal passes through the plane where the scene edge is located (i.e., from inside to outside or from outside to inside).
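A minimal sketch of this first check follows (not code from the patent; all names, the centroid choice of target points, the camera-projection step, and the 0/-1/1 sign convention are assumptions for illustration):

```python
# Illustrative sketch of the "first result" check; plain 3-tuples serve as
# world-space vectors, so no external libraries are needed.

def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def centroid(quad):
    # Average of the four edge vertices: the "target point" of one frame.
    return tuple(sum(v[i] for v in quad) / len(quad) for i in range(3))

def plane_normal(quad):
    # Normal of the plane spanned by three of the four edge vertices.
    return cross(sub(quad[1], quad[0]), sub(quad[2], quad[0]))

def point_in_quad(p, quad, n):
    # p lies inside the (convex) quadrilateral if it sits on the same side
    # of all four edges, i.e. every edge cross product agrees with n.
    sides = [dot(cross(sub(quad[(i + 1) % 4], quad[i]), sub(p, quad[i])), n) >= 0.0
             for i in range(4)]
    return all(sides) or not any(sides)

def first_result(quad_prev, quad_curr, cam_curr):
    """0 = not passed through, -1 = crossed along the normal, 1 = against it."""
    n = plane_normal(quad_curr)
    d1 = dot(sub(centroid(quad_prev), cam_curr), n)  # first dot-product result
    d2 = dot(sub(centroid(quad_curr), cam_curr), n)  # second dot-product result
    if d1 * d2 >= 0.0:
        return 0                      # same sign: the plane did not sweep past
    # Project the camera onto the current edge plane and test containment.
    k = dot(sub(cam_curr, quad_curr[0]), n) / dot(n, n)
    hit = tuple(cam_curr[i] - k * n[i] for i in range(3))
    if not point_in_quad(hit, quad_curr, n):
        return 0
    return -1 if d1 > 0.0 else 1      # crossing direction relative to the normal
```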
Step b: According to the world coordinate positions of the terminal in the previous frame and the current frame, and the world coordinate position of the scene edge of the virtual model in the current frame, determine a second result of whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located.
The movement vector of the virtual camera is obtained from its world coordinate positions in the previous and current frames. Based on the world coordinate position of the scene edge in the current frame, it is determined whether the straight line along which the camera's movement vector lies intersects the plane formed by the four vertices on the scene edge, and whether the intersection point lies on the same side of each of the four edges connecting the vertices. If both conditions are satisfied, the second result can be determined as: the viewpoint of the terminal passes through the plane where the scene edge is located; if at least one of the two conditions is not satisfied, the second result is that the viewpoint does not pass through the plane. The process of determining whether the straight line along the camera's movement vector intersects the plane formed by the four vertices may be as follows: suppose the straight line along the movement vector is AB, and the intersection of line AB with the plane formed by the four vertices is C; judge whether C lies within segment AB. If so, an intersection exists; if not, there is no intersection.
It should be noted that the terminal may determine, based on a Simultaneous Localization and Mapping (SLAM) algorithm, the world coordinate positions of the scene edge of the virtual model in the previous and current frames, the world coordinate position of the terminal in the current frame, the world coordinate positions of the terminal in the previous and current frames, and the world coordinate position of the scene edge in the current frame.
Step c: According to at least one of the first result and the second result, determine a target result of whether the terminal passes through the plane where the scene edge of the virtual model is located.
After the first result and the second result are obtained, they may be considered together to determine the target result of whether the viewpoint of the terminal passes through the plane where the scene edge is located. That is, the target result may be determined from the first result alone, from the second result alone, or from both the first result and the second result. Optionally, with the label 0 for "not passed through", -1 for "passed through in the forward direction", and 1 for "passed through in the reverse direction", the first result may be taken as the target result when the first result is not 0, and the second result may be taken as the target result when the first result is 0.
S103: Acquire an interaction instruction for the virtual scene, and display, on the terminal, an interactive effect corresponding to the interaction instruction.
After the view of the terminal has switched to the virtual scene, the virtual scene supports interactive functions, and the user can trigger corresponding operations. After acquiring the user's interaction instruction for the virtual scene, the terminal can display the corresponding interactive effect based on the instruction, so that the user perceives changes in the virtual scene, enhancing interactivity with virtual information. Illustratively, as shown in FIG. 2, the current interface of the terminal displays a real scene picture that includes a virtual model, and the scene edge of the virtual model expands toward the terminal, i.e., spreads toward the edge of the terminal's screen. A partial virtual scene picture is displayed inside the scene edge; this partial picture is obtained by performing screen-UV sampling of the panorama of the virtual scene, with the scene edge of the virtual model as the reference. As the scene edge keeps expanding toward the terminal, the viewpoint of the terminal keeps advancing into the virtual scene, and when it is detected that the viewpoint passes through the plane where the scene edge is located, the display picture of the terminal switches from the real scene to the virtual scene, i.e., the current display interface shows the virtual scene picture. After entering the virtual scene, the terminal can detect the user's interaction instructions and display the corresponding interactive effects based on them.
Optionally, if it is detected again that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, the display picture of the terminal is controlled to switch from the virtual scene back to the real scene.
In practical applications, to exit the virtual scene, the terminal may move a preset distance in the direction away from the virtual scene. During this process, whether the viewpoint of the terminal passes through the plane where the scene edge is located is detected in real time. If it is detected again that the viewpoint passes through that plane, the display picture of the terminal can switch from the virtual scene to the real scene, i.e., the display interface shows the virtual scene picture before the switch and the real scene picture after the switch.
With the virtual reality interaction method provided by the embodiments of the present disclosure, a real scene picture collected in real time is displayed on a terminal, the real scene picture includes a virtual model, and the scene edge of the virtual model expands toward the terminal. If it is detected that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, the terminal is controlled to switch from the real scene to the virtual scene; an interaction instruction for the virtual scene is acquired, and the interactive effect corresponding to the instruction is displayed on the terminal. This realizes a service in which a virtual model is added to the real scene picture, the current display picture of the terminal switches from the real scene to the virtual scene when the viewpoint of the terminal is detected to pass through the plane where the scene edge of the virtual model is located, and interaction with the user is possible within the virtual scene. The above solution enriches the display effects of virtual reality interaction, enhances interactivity during the virtual reality interaction process, and meets users' personalized display needs for virtual information.
After the view of the terminal enters the virtual scene, the user can browse the virtual scene through 360 degrees. On the basis of the above embodiments, optionally, as shown in FIG. 3, S103 may include:
S301: Detect rotational motion data of the terminal.
For example, the rotational motion data may include the rotation direction and rotation angle of the terminal. The terminal is provided with a corresponding sensor, such as a gyroscope, through which the rotational motion data of the terminal can be detected.
S302: Display, on the terminal, a virtual scene picture corresponding to the rotational motion data in the virtual scene.
After rotation of the terminal is detected, i.e., the view of the terminal has changed within the virtual scene, the virtual scene picture corresponding to the acquired rotational motion data can be rendered and displayed on the terminal. In this way, as the terminal rotates, the user can browse the virtual scene through 360 degrees.
Optionally, the virtual scene picture may include dynamic objects and static objects. A static object is an object whose state does not change with the picture, such as green hills, houses, and clouds; a dynamic object is an object whose state can change with the picture, such as carp and fireworks.
Optionally, the objects in the virtual scene picture may be three-dimensional objects. On this basis, and on the basis of the above embodiments, optionally, S302 may include:
S3021: Determine a first target object in the virtual scene corresponding to the rotational motion data.
After rotation of the terminal is detected, i.e., the view of the terminal has changed within the virtual scene, the first target object in the virtual scene corresponding to the acquired rotational motion data can be determined.
S3022: Acquire depth information of the first target object.
The depth information refers to the distance between the plane where the terminal's camera is located and the surface of the first target object.
S3023: Render the first target object according to the depth information, and display the rendering result on the terminal.
After obtaining the depth information of the first target object, the terminal can render the first target object based on its depth values and display the rendering result, so that the displayed rendering result conveys a sense of spatial depth.
It should be noted that the number of first target objects may be one or more.
Optionally, the virtual scene picture may further include interaction guidance information, for example, an animated guiding hand. Through this guidance information, the objects that can be interacted with in the virtual scene can be clearly identified, i.e., which objects are interactive and which are not. For example, an interactive object in the virtual scene picture may be a lantern, and an animated guiding hand is placed at the corresponding position of the lantern to indicate that the lantern supports interaction. In this way, the user can tap the screen position of the lantern to light it up, thereby interacting with the virtual scene.
On this basis, and on the basis of the above embodiments, optionally, the process of S103 may also be: in response to a trigger operation on the first one of the second target objects in the virtual scene, displaying a first interactive effect on the terminal; and in response to a trigger operation on the N-th second target object in the virtual scene, displaying the first interactive effect and a second interactive effect synchronously on the terminal.
Here, the first interactive effect is different from the second interactive effect, and N is a natural number greater than 1. The second target object supports user interaction: the user can perform a trigger operation on it, and after acquiring the trigger operation on a second target object in the virtual scene, the terminal displays the corresponding first interactive effect. The first interactive effect can be realized by corresponding texture-mapping or animation techniques. Taking the lantern as the second target object again: before the interaction, the lantern is dark and the virtual scene picture is dark as well. Guided by the animated hand in the virtual scene picture, the user triggers the lantern; after acquiring the trigger operation on the lantern, the terminal makes the lantern change from dark to bright with a blessing message appearing, and the virtual scene picture also changes from dark to bright. For example, texture mapping can be used to realize the dark-to-bright transition of the lantern and of the scene, while a corresponding animation is played to achieve the effect of a blessing scroll hanging down from the lantern.
In practical applications, the virtual scene picture may include a plurality of second target objects, and the user can perform trigger operations on them one by one to display the corresponding first interactive effect. To enrich the virtual scene picture, optionally, upon acquiring the trigger operation on the N-th second target object in the virtual scene, the terminal may display the first interactive effect and the second interactive effect synchronously, where the two effects are different, and the second interactive effect can likewise be realized by corresponding texture-mapping or animation techniques.
Taking lanterns as the second target objects again, suppose the virtual scene includes four lanterns, and the user can light each lantern in turn. Upon detecting the trigger operation on the fourth lantern, the terminal makes the fourth lantern change from dark to bright with a blessing scroll hanging down from it, and may synchronously present a rain of "Fu" (blessing) characters. For example, an animation of the "Fu" character rain can be produced in advance and played once the trigger operation on the fourth lantern is detected. Of course, presenting the first and second interactive effects synchronously on tapping the fourth lantern is only an example; corresponding settings can be made according to requirements.
Considering that the objects in the virtual scene are three-dimensional, after a position on the terminal screen is touched, the process of judging whether the touch position interacts with a three-dimensional object may be as follows: acquire the screen touch position, a first position of the terminal in three-dimensional space, and a second position of the second target object in three-dimensional space; convert the screen touch position into three-dimensional space to obtain a third position, and determine the ray corresponding to the touch point based on the third position (here, the touch point can be converted into the corresponding ray based on two preset depth values and the third position); normalize the ray to obtain its unit vector; then, based on the first and second positions, determine the distance from the terminal to the second target object, and multiply the unit vector by this distance to obtain a target vector; obtain the coordinates of the arrival point of the target vector through the camera coordinates of the terminal, and based on the coordinates of the arrival point and of the center of the second target object, determine whether the arrival point lies within the space occupied by the second target object. If so, it is determined that the touch point on the screen has interacted with the second target object; if not, it has not.
Through the above judgment approach, accurate determination of the interaction between a screen touch point and a three-dimensional object in the virtual scene is achieved, which improves the accuracy of the localization result and, in turn, the accuracy of the virtual reality interaction.
In this embodiment, the virtual scene picture corresponding to the rotational motion data of the terminal can be displayed based on that data, thereby realizing 360-degree browsing of the virtual scene and perception of its changes, enriching the interaction modes, and meeting users' personalized display needs for the virtual scene. In addition, it is also possible to interact with objects in the virtual scene picture and display more realistic interactive effects, which enriches the display effects of the virtual scene picture, enhances the fun of virtual reality interaction, and improves the user experience.
In one embodiment, to improve the display effect of the virtual picture, for example so that the rain of "Fu" characters appearing in the virtual picture blends better into the virtual scene picture and the "Fu" characters appear translucent: on the basis of the above embodiments, optionally, as shown in FIG. 4, the process of S103 may also be:
S401: Acquire an interaction instruction for the virtual scene.
S402: Acquire the current virtual scene picture and the current interactive picture corresponding to the interaction instruction.
Here, the current virtual scene picture is the blended picture, and the current interactive picture is the picture to be blended. The current interactive picture refers to the current frame in an interactive frame sequence, for example a frame sequence of the "Fu" character rain.
S403: Perform a blending operation on the current interactive picture using the current virtual scene picture to obtain a target virtual scene picture.
Usually, when performing a blending operation, the current virtual scene picture serves as the blended picture and the current interactive picture as the picture to be blended. In this embodiment, however, the blended picture and the picture to be blended are set in the opposite relationship: the current virtual scene picture serves as the picture to be blended and the current interactive picture as the blended picture, and the blending operation is performed on the current interactive picture using the current virtual scene picture. The blending procedure may follow blending algorithms in the related art.
S404: Display the target virtual scene picture on the terminal.
In this embodiment, by setting the current virtual scene picture and the current interactive picture in the opposite relationship during the multi-layer blending operation, the finally displayed target virtual scene picture looks more realistic, which enriches the display effects of the virtual picture. For example, after all the lanterns have been lit, the rain of "Fu" characters appearing in the virtual scene blends better into the virtual scene picture, so that the "Fu" characters appear translucent.
In practical applications, since the field of view of the terminal's camera is limited, the camera clips objects outside the view frustum, where the view frustum is the shape of the region a perspective camera can see and render; anything closer to the camera than the near clipping plane is not rendered. Then, while the viewpoint of the terminal passes through the plane where the virtual model is located, as shown in FIG. 5, i.e., while the terminal switches from the real scene to the virtual scene, part of the virtual scene picture may fail to be rendered because of clipping by the camera's near clipping plane; likewise, while the terminal switches from the virtual scene to the real scene, part of the real scene picture may fail to be rendered for the same reason. To address this, processing may follow the procedure of the following embodiment. On the basis of the above embodiments, optionally, the method may further include: when the distance between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located satisfies a preset threshold, performing a complement operation on the currently displayed picture to fill in the picture clipped off by the near clipping plane of the terminal.
The preset threshold may be determined based on the distance from the plane where the terminal's viewpoint is located to the near clipping plane of the camera's view frustum. In this way, when the distance between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located is less than or equal to the preset threshold, near-plane clipping occurs, and the terminal can perform a complement operation on the currently displayed picture on the screen to fill in the picture clipped off by the near clipping plane. Thus, while the terminal switches from the real scene to the virtual scene, the screen displays the virtual scene picture, avoiding the problem of part of the virtual scene picture not being rendered; likewise, while the terminal switches from the virtual scene to the real scene, the screen displays the real scene picture, avoiding the problem of part of the real scene picture not being rendered.
As an optional implementation, the process of performing the complement operation on the currently displayed picture may be: determining a target filling picture according to the relative positional relationship between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located; and performing the complement operation on the currently displayed picture using the target filling picture.
The above relative positional relationship can reflect the direction in which the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located. When the viewpoint of the terminal passes through the plane in the forward direction, that is, while the terminal switches from the real scene to the virtual scene, screen-UV sampling of the virtual scene picture is performed over the quadrilateral formed by the four vertices set on the virtual model that follow the movement of the scene edge, and the sampling result is taken as the target filling picture. Then, the complement operation is performed on the currently displayed picture using this target filling picture; for example, the target filling picture can be placed as a layer at the bottom of the currently displayed picture, thereby completing the virtual scene picture clipped off by the camera's near clipping plane.
When the viewpoint of the terminal passes through the plane in the reverse direction, that is, while the terminal switches from the virtual scene to the real scene, screen-UV sampling of the real scene picture is performed over the quadrilateral formed by the same four vertices that follow the movement of the scene edge, and the sampling result is taken as the target filling picture. Then, the complement operation is performed on the currently displayed picture using this target filling picture; for example, the target filling picture can be placed as a layer at the bottom of the currently displayed picture, thereby completing the real scene picture clipped off by the camera's near clipping plane.
In this embodiment, while the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, the picture clipped off by the camera's near clipping plane can be complemented so that the displayed picture matches expectations: when the terminal switches from the real scene to the virtual scene, the screen displays the virtual scene picture, avoiding partial display of the real scene picture; when the terminal switches from the virtual scene to the real scene, the screen displays the real scene picture, avoiding partial display of the virtual scene picture. The finally presented visual effect is thus closer to reality and meets expected requirements, which improves the display effects of virtual reality interaction.
FIG. 6 is a schematic structural diagram of a virtual reality interaction apparatus provided by an embodiment of the present disclosure. As shown in FIG. 6, the apparatus may include: a first display module 601, a first control module 602, and a second display module 603.
For example, the first display module 601 is configured to display, on a terminal, a real scene picture collected in real time, wherein the real scene picture includes a virtual model;
the first control module 602 is configured to, during the process in which the scene edge of the virtual model expands toward the terminal, upon detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, control the display picture of the terminal to switch from the real scene to the virtual scene; and
the second display module 603 is configured to acquire an interaction instruction for the virtual scene, and display, on the terminal, the interactive effect corresponding to the interaction instruction.
With the virtual reality interaction apparatus provided by the embodiments of the present disclosure, a real scene picture collected in real time is displayed on a terminal, the real scene picture includes a virtual model, and the scene edge of the virtual model expands toward the terminal. If it is detected that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, the display picture of the terminal is controlled to switch from the real scene to the virtual scene; an interaction instruction for the virtual scene is acquired, and the interactive effect corresponding to the instruction is displayed on the terminal. This realizes a service in which a virtual model is added to the real scene picture, the current display picture of the terminal switches from the real scene to the virtual scene when the viewpoint of the terminal is detected to pass through the plane where the scene edge of the virtual model is located, and interaction with the user is possible within the virtual scene. The above solution enriches the display effects of virtual reality interaction, enhances interactivity during the interaction process, and meets users' personalized display needs for virtual information.
On the basis of the above embodiments, optionally, the second display module 603 may include: a detection unit and a first display unit.
For example, the detection unit is configured to detect rotational motion data of the terminal;
the first display unit is configured to display, on the terminal, the virtual scene picture corresponding to the rotational motion data in the virtual scene.
Optionally, the virtual scene picture includes interaction guidance information.
On the basis of the above embodiments, optionally, the first display unit is configured to determine and display the virtual scene picture corresponding to the rotational motion data in the following manner: determining a first target object in the virtual scene corresponding to the rotational motion data; acquiring depth information of the first target object; and rendering the first target object according to the depth information and displaying the rendering result on the terminal.
On the basis of the above embodiments, optionally, the second display module 603 may further include: a second display unit and a third display unit.
For example, the second display unit is configured to display a first interactive effect on the terminal in response to a trigger operation on the first one of the second target objects in the virtual scene;
the third display unit is configured to synchronously display the first interactive effect and a second interactive effect in response to a trigger operation on the N-th second target object in the virtual scene, wherein the first interactive effect is different from the second interactive effect, and N is a natural number greater than 1.
On the basis of the above embodiments, optionally, the second display module 603 may further include: a first acquisition unit, a second acquisition unit, a processing unit, and a fourth display unit.
For example, the first acquisition unit is configured to acquire an interaction instruction for the virtual scene;
the second acquisition unit is configured to acquire the current virtual scene picture and the current interactive picture corresponding to the interaction instruction, wherein the current virtual scene picture is the blended picture and the current interactive picture is the picture to be blended;
the processing unit is configured to perform a blending operation on the current interactive picture using the current virtual scene picture to obtain a target virtual scene picture; and
the fourth display unit is configured to display the target virtual scene picture.
On the basis of the above embodiments, optionally, the apparatus may further include: a second control module.
For example, the second control module is configured to, upon detecting again that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, control the display picture of the terminal to switch from the virtual scene to the real scene.
On the basis of the above embodiments, optionally, the apparatus may further include: a picture complement module.
For example, the picture complement module is configured to, when the distance between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located satisfies a preset threshold, perform a complement operation on the currently displayed picture to fill in the picture clipped off by the near clipping plane of the terminal.
On the basis of the above embodiments, optionally, the picture complement module is configured to complement the currently displayed picture in the following manner: determining a target filling picture according to the relative positional relationship between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located; and performing the complement operation on the currently displayed picture using the target filling picture.
On the basis of the above embodiments, optionally, the apparatus may further include a detection module.
For example, the detection module is configured to detect that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located;
In one embodiment, the detection module is configured to detect that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located in the following manner: according to the world coordinate positions of the scene edge of the virtual model in the previous frame and the current frame, and the world coordinate position of the terminal in the current frame, determining a first result of whether the viewpoint of the terminal passes through the plane where the scene edge is located; according to the world coordinate positions of the terminal in the previous frame and the current frame, and the world coordinate position of the scene edge in the current frame, determining a second result of whether the viewpoint of the terminal passes through the plane where the scene edge is located; and according to at least one of the first result and the second result, determining a target result of whether the viewpoint of the terminal passes through the plane where the scene edge is located.
Optionally, the detection module is further configured to detect that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located in the following manner: the scene edge of the virtual model expands until it completely covers the display picture.
Referring now to FIG. 7, a schematic structural diagram of an electronic device 700 suitable for implementing embodiments of the present disclosure is shown. Electronic devices in the embodiments of the present disclosure may include mobile terminals such as mobile phones, laptops, digital broadcast receivers, Personal Digital Assistants (PDAs), tablet computers (Portable Android Devices, PADs), Portable Media Players (PMPs), and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as fixed terminals such as digital television sets (i.e., digital TVs) and desktop computers. The electronic device shown in FIG. 7 is merely an example.
As shown in FIG. 7, the electronic device 700 may include a processing apparatus (e.g., a central processing unit, a graphics processor, etc.) 701, which can perform various appropriate actions and processing according to a program stored in a Read Only Memory (ROM) 702 or a program loaded from a storage apparatus 708 into a Random Access Memory (RAM) 703. Various programs and data required for the operation of the electronic device 700 are also stored in the RAM 703. The processing apparatus 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An Input/Output (I/O) interface 705 is also connected to the bus 704.
Generally, the following apparatuses may be connected to the I/O interface 705: an input apparatus 706 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 707 including, for example, a Liquid Crystal Display (LCD), a speaker, and a vibrator; a storage apparatus 708 including, for example, a magnetic tape and a hard disk; and a communication apparatus 709. The communication apparatus 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 7 shows an electronic device 700 with various apparatuses, it should be understood that it is not required to implement or provide all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In one embodiment, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication apparatus 709, or installed from the storage apparatus 708, or installed from the ROM 702. When the computer program is executed by the processing apparatus 701, the above functions defined in the methods of the embodiments of the present disclosure are executed.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples of computer-readable storage media may include: an electrical connection having at least one wire, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (such as an Electronic Programmable Read Only Memory (EPROM) or flash memory), an optical fiber, a portable Compact Disc-Read Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be a tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take various forms, including an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, and it can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. Program code contained on a computer-readable medium may be transmitted by any appropriate medium, including: a wire, an optical cable, Radio Frequency (RF), etc., or any suitable combination of the above.
In one implementation, the client and the server may communicate using any currently known or future-developed network protocol such as the HyperText Transfer Protocol (HTTP), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a Local Area Network (LAN), a Wide Area Network (WAN), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device, or may exist independently without being assembled into the electronic device.
The above computer-readable medium carries at least one program which, when executed by the electronic device, causes the electronic device to: display, on a terminal, a real scene picture collected in real time, wherein the real scene picture includes a virtual model; during the process in which the scene edge of the virtual model expands toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, control the display picture of the terminal to switch from the real scene to the virtual scene; and acquire an interaction instruction for the virtual scene and display, on the terminal, the interactive effect corresponding to the interaction instruction.
Computer program code for executing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code, which contains at least one executable instruction for implementing a specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks therein, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself; for example, the first acquisition unit may also be described as "a unit for acquiring at least two Internet Protocol addresses".
The functions described herein above may be executed, at least in part, by at least one hardware logic component. For example, exemplary types of hardware logic components that can be used include: Field-Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Parts (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Examples of machine-readable storage media may include an electrical connection based on at least one wire, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), an Erasable Programmable Read Only Memory (EPROM or flash memory), an optical fiber, a portable Compact Disc-Read Only Memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
In one embodiment, an electronic device is provided, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the following steps:
displaying, on a terminal, a real scene picture collected in real time, wherein the real scene picture includes a virtual model;
during the process in which the scene edge of the virtual model expands toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the real scene to the virtual scene; and
acquiring an interaction instruction for the virtual scene, and displaying, on the terminal, the interactive effect corresponding to the interaction instruction.
In one embodiment, a computer-readable storage medium is further provided, on which a computer program is stored, the computer program, when executed by a processor, implementing the following steps:
displaying, on a terminal, a real scene picture collected in real time, wherein the real scene picture includes a virtual model;
during the process in which the scene edge of the virtual model expands toward the terminal, in response to detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the real scene to the virtual scene; and
acquiring an interaction instruction for the virtual scene, and displaying, on the terminal, the interactive effect corresponding to the interaction instruction.
The virtual reality interaction apparatus, device, and storage medium provided in the above embodiments can execute the virtual reality interaction method provided by any embodiment of the present disclosure, and have the corresponding functional modules and beneficial effects for executing the method. For technical details not exhaustively described in the above embodiments, reference may be made to the virtual reality interaction method provided by any embodiment of the present disclosure.
According to one or more embodiments of the present disclosure, a virtual reality interaction method is provided, comprising:
displaying, on a terminal, a real scene picture collected in real time, wherein the real scene picture includes a virtual model;
during a process in which a scene edge of the virtual model expands toward the terminal, in response to detecting that a viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the real scene to a virtual scene; and
acquiring an interaction instruction for the virtual scene, and displaying, on the terminal, an interactive effect corresponding to the interaction instruction.
According to one or more embodiments of the present disclosure, the above virtual reality interaction method is provided, further comprising: detecting rotational motion data of the terminal; and displaying, on the terminal, the virtual scene picture corresponding to the rotational motion data in the virtual scene.
Optionally, the virtual scene picture includes interaction guidance information.
According to one or more embodiments of the present disclosure, the above virtual reality interaction method is provided, further comprising: determining a first target object in the virtual scene corresponding to the rotational motion data; acquiring depth information of the first target object; and rendering the first target object according to the depth information and displaying the rendering result on the terminal.
According to one or more embodiments of the present disclosure, the above virtual reality interaction method is provided, further comprising: in response to a trigger operation on the first one of the second target objects in the virtual scene, displaying a first interactive effect on the terminal; and in response to a trigger operation on the N-th second target object in the virtual scene, displaying the first interactive effect and a second interactive effect synchronously on the terminal, wherein the first interactive effect is different from the second interactive effect, and N is a natural number greater than 1.
According to one or more embodiments of the present disclosure, the above virtual reality interaction method is provided, further comprising: acquiring an interaction instruction for the virtual scene; acquiring the current virtual scene picture and the current interactive picture corresponding to the interaction instruction, wherein the current virtual scene picture is the blended picture and the current interactive picture is the picture to be blended; performing a blending operation on the current interactive picture using the current virtual scene picture to obtain a target virtual scene picture; and displaying the target virtual scene picture on the terminal.
According to one or more embodiments of the present disclosure, the above virtual reality interaction method is provided, further comprising: in response to detecting again that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the virtual scene to the real scene.
According to one or more embodiments of the present disclosure, the above virtual reality interaction method is provided, further comprising: when the distance between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located satisfies a preset threshold, performing a complement operation on the currently displayed picture to fill in the picture clipped off by the near clipping plane of the terminal.
According to one or more embodiments of the present disclosure, the above virtual reality interaction method is provided, further comprising: determining a target filling picture according to the relative positional relationship between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located; and performing the complement operation on the currently displayed picture using the target filling picture.
According to one or more embodiments of the present disclosure, the above virtual reality interaction method is provided, further comprising: according to the world coordinate positions of the scene edge of the virtual model in the previous frame and the current frame, and the world coordinate position of the terminal in the current frame, determining a first result of whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located; according to the world coordinate positions of the terminal in the previous frame and the current frame, and the world coordinate position of the scene edge of the virtual model in the current frame, determining a second result of whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located; and according to at least one of the first result and the second result, determining a target result of whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located.
According to one or more embodiments of the present disclosure, the above virtual reality interaction method is provided, further comprising: the scene edge of the virtual model expanding until it completely covers the display picture.
Claims (14)
- A virtual reality interaction method, comprising: displaying, on a terminal, a real scene picture collected in real time, wherein the real scene picture includes a virtual model; during a process in which a scene edge of the virtual model expands toward the terminal, in response to detecting that a viewpoint of the terminal passes through a plane where the scene edge of the virtual model is located, controlling a display picture of the terminal to switch from a real scene to a virtual scene; and acquiring an interaction instruction for the virtual scene, and displaying, on the terminal, an interactive effect corresponding to the interaction instruction.
- The method according to claim 1, wherein the acquiring the interaction instruction for the virtual scene and displaying, on the terminal, the interactive effect corresponding to the interaction instruction comprises: detecting rotational motion data of the terminal; and displaying, on the terminal, a virtual scene picture corresponding to the rotational motion data in the virtual scene.
- The method according to claim 2, wherein the virtual scene picture includes interaction guidance information.
- The method according to claim 2, wherein the displaying, on the terminal, the virtual scene picture corresponding to the rotational motion data in the virtual scene comprises: determining a first target object in the virtual scene corresponding to the rotational motion data; acquiring depth information of the first target object; and rendering the first target object according to the depth information, and displaying a rendering result on the terminal.
- The method according to claim 1, wherein the acquiring the interaction instruction for the virtual scene and displaying, on the terminal, the interactive effect corresponding to the interaction instruction comprises: in response to a trigger operation on a first one of second target objects in the virtual scene, displaying a first interactive effect on the terminal; and in response to a trigger operation on an N-th second target object in the virtual scene, synchronously displaying the first interactive effect and a second interactive effect on the terminal, wherein the first interactive effect is different from the second interactive effect, and N is a natural number greater than 1.
- The method according to any one of claims 1 to 5, wherein the acquiring the interaction instruction for the virtual scene and displaying, on the terminal, the interactive effect corresponding to the interaction instruction comprises: acquiring the interaction instruction for the virtual scene; acquiring a current virtual scene picture and a current interactive picture corresponding to the interaction instruction, wherein the current virtual scene picture is a blended picture and the current interactive picture is a picture to be blended; performing a blending operation on the current interactive picture using the current virtual scene picture to obtain a target virtual scene picture; and displaying the target virtual scene picture on the terminal.
- The method according to any one of claims 1 to 5, further comprising: in response to detecting again that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located, controlling the display picture of the terminal to switch from the virtual scene to the real scene.
- The method according to claim 7, further comprising: when a distance between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located satisfies a preset threshold, performing a complement operation on a currently displayed picture to fill in a picture clipped off by a near clipping plane of the terminal.
- The method according to claim 8, wherein the performing the complement operation on the currently displayed picture comprises: determining a target filling picture according to a relative positional relationship between the viewpoint of the terminal and the plane where the scene edge of the virtual model is located; and performing the complement operation on the currently displayed picture using the target filling picture.
- The method according to any one of claims 1 to 5, wherein detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located comprises: determining, according to world coordinate positions of the scene edge of the virtual model in a previous frame and a current frame and a world coordinate position of the terminal in the current frame, a first result of whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located; determining, according to world coordinate positions of the terminal in the previous frame and the current frame and the world coordinate position of the scene edge of the virtual model in the current frame, a second result of whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located; and determining, according to at least one of the first result and the second result, a target result of whether the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located.
- The method according to any one of claims 1 to 5, wherein detecting that the viewpoint of the terminal passes through the plane where the scene edge of the virtual model is located comprises: the scene edge of the virtual model expanding until it completely covers the display picture.
- A virtual reality interaction apparatus, comprising: a first display module, configured to display, on a terminal, a real scene picture collected in real time, wherein the real scene picture includes a virtual model; a first control module, configured to, during a process in which a scene edge of the virtual model expands toward the terminal, upon detecting that a viewpoint of the terminal passes through a plane where the scene edge of the virtual model is located, control a display picture of the terminal to switch from a real scene to a virtual scene; and a second display module, configured to acquire an interaction instruction for the virtual scene, and display, on the terminal, an interactive effect corresponding to the interaction instruction.
- An electronic device, comprising a memory and a processor, wherein the memory stores a computer program, and the processor, when executing the computer program, implements the virtual reality interaction method according to any one of claims 1 to 11.
- A computer-readable storage medium, storing a computer program which, when executed by a processor, implements the virtual reality interaction method according to any one of claims 1 to 11.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210074599.8A CN114461064B (zh) | 2022-01-21 | 2022-01-21 | Virtual reality interaction method and apparatus, device, and storage medium |
CN202210074599.8 | 2022-01-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023138559A1 (zh) | 2023-07-27 |
Family
ID=81412369
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/072538 WO2023138559A1 (zh) | Virtual reality interaction method and apparatus, device, and storage medium | 2022-01-21 | 2023-01-17 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN114461064B (zh) |
WO (1) | WO2023138559A1 (zh) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN114461064B (zh) | 2022-01-21 | 2023-09-15 | 北京字跳网络技术有限公司 | Virtual reality interaction method and apparatus, device, and storage medium |
- CN115035239B (zh) * | 2022-05-11 | 2023-05-09 | 北京宾理信息科技有限公司 | Method and apparatus for constructing a virtual environment, computer device, and vehicle |
- CN115793864B (zh) * | 2023-02-09 | 2023-05-16 | 宏景科技股份有限公司 | Virtual reality response apparatus and method, and storage medium |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8089479B2 (en) * | 2008-04-11 | 2012-01-03 | Apple Inc. | Directing camera behavior in 3-D imaging system |
US9773341B2 (en) * | 2013-03-14 | 2017-09-26 | Nvidia Corporation | Rendering cover geometry without internal edges |
- CN108337497B (zh) * | 2018-02-07 | 2020-10-16 | 刘智勇 | Virtual reality video/image format and method and apparatus for shooting, processing, and playback |
- CN108805989B (zh) * | 2018-06-28 | 2022-11-11 | 百度在线网络技术(北京)有限公司 | Scene traversal method and apparatus, storage medium, and terminal device |
- CN110163976B (zh) * | 2018-07-05 | 2024-02-06 | 腾讯数码(天津)有限公司 | Virtual scene conversion method and apparatus, terminal device, and storage medium |
- CN109993823B (zh) * | 2019-04-11 | 2022-11-25 | 腾讯科技(深圳)有限公司 | Shadow rendering method and apparatus, terminal, and storage medium |
- CN110275617A (zh) * | 2019-06-21 | 2019-09-24 | 姚自栋 | Switching method and system for mixed reality scenes, storage medium, and terminal |
- RU2733161C1 (ru) * | 2020-02-28 | 2020-09-29 | Сергей Дарчоевич Арутюнов | Method for manufacturing a removable denture |
- CN112215966A (zh) * | 2020-10-13 | 2021-01-12 | 深圳市齐天智能方案设计有限公司 | Method for taking group photos combining virtual images with real user photos |
- CN112767531B (zh) * | 2020-12-30 | 2022-04-29 | Zhejiang University | Face-region modeling method for human body models in mobile-oriented virtual fitting |
- 2022-01-21: Application CN202210074599.8A filed in China; published as CN114461064B (status: Active)
- 2023-01-17: Application PCT/CN2023/072538 filed; published as WO2023138559A1 (status: unknown)
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190385371A1 (en) * | 2018-06-19 | 2019-12-19 | Google Llc | Interaction system for augmented reality objects |
- CN111324253A (zh) * | 2020-02-12 | 2020-06-23 | 腾讯科技(深圳)有限公司 | Virtual item interaction method and apparatus, computer device, and storage medium |
- CN112148189A (zh) * | 2020-09-23 | 2020-12-29 | 北京市商汤科技开发有限公司 | Interaction method and apparatus in an AR scene, electronic device, and storage medium |
- CN112933606A (zh) * | 2021-03-16 | 2021-06-11 | 天津亚克互动科技有限公司 | Game scene conversion method and apparatus, storage medium, and computer device |
- CN114461064A (zh) * | 2022-01-21 | 2022-05-10 | 北京字跳网络技术有限公司 | Virtual reality interaction method and apparatus, device, and storage medium |
Non-Patent Citations (1)
Title |
---|
"Master's Thesis", 12 April 2009, SHANDONG UNIVERSITY, CN, article LIU, MU: "Research and Implementation of Key Technology in Real-time 3D Roaming System", pages: 1 - 57, XP009547774 * |
Also Published As
Publication number | Publication date |
---|---|
CN114461064B (zh) | 2023-09-15 |
CN114461064A (zh) | 2022-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2023138559A1 (zh) | Virtual reality interaction method and apparatus, device, and storage medium | |
US20230360337A1 (en) | Virtual image displaying method and apparatus, electronic device and storage medium | |
EP4044606B1 (en) | View adjustment method and apparatus for target device, electronic device, and medium | |
WO2023179346A1 (zh) | 特效图像处理方法、装置、电子设备及存储介质 | |
CN112051961A (zh) | 虚拟交互方法、装置、电子设备及计算机可读存储介质 | |
CN112672185B (zh) | 基于增强现实的显示方法、装置、设备及存储介质 | |
CN112965780B (zh) | 图像显示方法、装置、设备及介质 | |
WO2023138548A1 (zh) | 图像处理方法、装置、设备和存储介质 | |
US20240345706A1 (en) | Control display method and apparatus, electronic device, storage medium, and program product | |
US11869195B2 (en) | Target object controlling method, apparatus, electronic device, and storage medium | |
WO2022183887A1 (zh) | 视频编辑及播放方法、装置、设备、介质 | |
CN110070617B (zh) | 数据同步方法、装置、硬件装置 | |
CN111862342B (zh) | 增强现实的纹理处理方法、装置、电子设备及存储介质 | |
CN113589926A (zh) | 虚拟界面操作方法、头戴式显示设备和计算机可读介质 | |
CN111897437A (zh) | 跨终端的交互方法、装置、电子设备以及存储介质 | |
US20220319062A1 (en) | Image processing method, apparatus, electronic device and computer readable storage medium | |
US20230199262A1 (en) | Information display method and device, and terminal and storage medium | |
JP7560207B2 (ja) | オブジェクトの表示方法、装置、電子機器及びコンピュータ可読型記憶媒体 | |
RU2802724C1 (ru) | Способ и устройство обработки изображений, электронное устройство и машиночитаемый носитель информации | |
US20240284038A1 (en) | Photographing guiding method and apparatus, and electronic device and storage medium | |
US20240269553A1 (en) | Method, apparatus, electronic device and storage medium for extending reality display | |
US20240153159A1 (en) | Method, apparatus, electronic device and storage medium for controlling based on extended reality | |
US20240223849A1 (en) | Live-streaming video stream playing method and apparatus, and electronic device and storage medium | |
CN111324404B (zh) | 信息获取进度的显示方法、装置、电子设备及可读介质 | |
CN114417204A (zh) | 信息生成方法、装置和电子设备 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 23742874; Country of ref document: EP; Kind code of ref document: A1
 | NENP | Non-entry into the national phase | Ref country code: DE