CN107479699A - Virtual reality interaction method, apparatus and system - Google Patents
Virtual reality interaction method, apparatus and system
- Publication number
- CN107479699A (application CN201710632426.2A)
- Authority
- CN
- China
- Prior art keywords
- user
- virtual scene
- information
- virtual
- sensing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/005—General purpose rendering architectures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/01—Indexing scheme relating to G06F3/01
- G06F2203/012—Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Human Computer Interaction (AREA)
- Information Transfer Between Computers (AREA)
- Processing Or Creating Images (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention is applicable to the technical field of virtual reality and provides a virtual reality interaction method, apparatus and system. The interaction method includes: receiving first image information of a first user and first sensing information of the first user from a collector, the first image information identifying first position information of the first user in a virtual scene; determining, according to the first sensing information, a first user behavior of the first user in the virtual scene; and rendering the virtual scene according to the first position information, the first user behavior and viewing-angle information of the first user, and displaying it to the first user. Applying the technical solution of the present invention can solve problems in existing virtual reality interaction methods such as interactive behavior that is not sufficiently natural, realistic or vivid.
Description
Technical field
The present invention belongs to the technical field of virtual reality, and in particular relates to a virtual reality interaction method, apparatus and system.
Background art
With the development of personal virtual reality entertainment devices and the rapid progress of computer graphics performance, the demand for virtual reality interaction and entertainment is becoming increasingly urgent.
At present, the general principle of existing virtual reality interaction is to obtain the user's three-dimensional spatial position, map that position into the virtual scene to obtain the user's position information in the virtual scene, and then render the corresponding virtual scene according to this position information and display it to the user. Existing virtual reality interaction suffers from interactive behavior that is not sufficiently natural, realistic or vivid; for example, the user's operating behavior cannot be accurately reflected in the virtual scene. It is therefore necessary to improve existing virtual reality interaction techniques.
Summary of the invention
In view of this, embodiments of the present invention provide a virtual reality interaction method, apparatus and system, intended to solve problems in existing virtual reality interaction implementations such as interactive behavior that is not sufficiently natural, realistic or vivid.
Accordingly, an embodiment of the present invention provides a virtual reality interaction method for interaction between a real user and a virtual scene, including:
receiving first image information of a first user and first sensing information of the first user from a collector, the first image information identifying first position information of the first user in the virtual scene;
determining, according to the first sensing information, a first user behavior of the first user in the virtual scene; and
rendering the virtual scene according to the first position information, the first user behavior and viewing-angle information of the first user, and displaying it to the first user.
Wherein, determining the first user behavior of the first user in the virtual scene according to the first sensing information includes:
identifying posture information of the first user according to the sensing information;
judging whether the posture information meets a preset condition; and
if so, determining that the user behavior of the first user in the virtual scene is the user behavior corresponding to the preset condition.
Wherein, the preset condition includes at least one of a distance threshold condition, a bone bending threshold condition, a bone stretching threshold condition and a speed threshold condition.
Wherein, before receiving the first image information and the first sensing information from the collector, the interaction method further includes:
defining the user behavior corresponding to the preset condition;
the user behavior including a stretching action, a bending action or a shaking action.
Wherein, receiving the first image information includes:
receiving the first position information collected by the motion-capture cameras and transmitted by the camera server.
Wherein, when there are at least two real users, before rendering the virtual scene according to the first position information, the first user behavior and the viewing-angle information of the first user and displaying it to the first user, the method further includes:
uploading the first sensing information to a virtual scene server; and
receiving a second user behavior and second position information of a second user transmitted by the virtual scene server, the second user behavior being determined according to second sensing information of the second user;
and rendering the virtual scene according to the first position information, the first user behavior and the viewing-angle information of the first user and displaying it to the first user includes:
rendering the virtual scene according to the first position information, the second position information, the first user behavior, the second user behavior and the viewing-angle information of the first user, and displaying it to the first user.
Correspondingly, the present invention also provides a virtual reality interaction apparatus for interaction between a real user and a virtual scene, including a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of any of the above virtual reality interaction methods when executing the computer program.
An embodiment of the present invention further provides a virtual reality interaction system, including an image acquisition device, a collector, a client and a head-mounted display;
the image acquisition device is configured to collect the first image information and transmit it to the client, the first image information identifying the first position information of the first user in the virtual scene;
the collector is configured to collect the first sensing information of the first user and transmit it to the client; and
the client is configured to determine, according to the first sensing information, the first user behavior of the first user in the virtual scene, render the virtual scene according to the first position information, the first user behavior and the viewing-angle information of the first user, and display it to the first user through the head-mounted display.
Wherein, the image acquisition device is specifically an optical motion-capture image acquisition device, including at least two motion-capture cameras and a camera server;
the motion-capture cameras are configured to collect the first image information of the first user and transmit it to the camera server; and
the camera server is configured to transmit the first image information to the client.
Wherein, when there are at least two real users, the system further includes a virtual scene server;
the client is configured to upload the first sensing information to the virtual scene server; receive the second user behavior and the second position information of the second user transmitted by the virtual scene server, the second user behavior being determined according to the second sensing information of the second user; and render the virtual scene according to the first position information, the second position information, the first user behavior, the second user behavior and the viewing-angle information of the first user, and display it to the first user through the head-mounted display.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects:
the user's position information in the virtual scene is determined from the image information of the motion-capture cameras, and the user's behavior in the virtual scene is determined from the sensing information gathered by the collector, so that, combined with the user's viewing-angle information, a virtual scene matching the user's viewing angle can be rendered. Compared with the prior art, the client combines motion-capture data and sensing data when rendering the virtual scene and, in particular, renders the user behavior itself. The virtual reality interaction method of the present invention can therefore accurately feed the user's operations back into the virtual scene and enhance the fineness, realism and vividness of user behavior in virtual reality interaction.
Brief description of the drawings
To explain the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of the virtual reality interaction method provided by the present invention;
Fig. 2 is a schematic flowchart of a second embodiment of the virtual reality interaction method provided by the present invention;
Fig. 3 is a schematic diagram of an embodiment of interface interaction provided by the present invention;
Fig. 4 is a schematic flowchart of a third embodiment of the virtual reality interaction method provided by the present invention;
Fig. 5 is a structural block diagram of an embodiment of the virtual reality interaction apparatus provided by the present invention;
Fig. 6 is a schematic structural diagram of an embodiment of the virtual reality interaction system provided by the present invention.
Detailed description of the embodiments
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present invention. However, it will be apparent to those skilled in the art that the present invention may also be practised in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the invention.
Before the detailed embodiments of the present invention are addressed, the virtual reality interaction flow is first briefly described in order to better understand the present invention. The interaction flow of virtual reality is usually: obtain the user's motion-capture data (three-dimensional spatial position), determine the user's position information in the virtual scene from the motion-capture data, then perform the corresponding interaction response according to the received sensing information of the user and the user's position in the virtual scene, and display the response result to the user synchronously. The response result is usually displayed by adjusting the virtual scene according to the response result and the user's viewing angle, and then presenting the adjusted virtual scene to the user through the head-mounted display worn by the user.
It should be noted that during virtual interaction the user's motion-capture data can be acquired in several ways, for example inertial, laser or optical motion capture. The subsequent embodiments of the present invention are described taking optical motion capture as an example.
The virtual reality interaction of the embodiments of the present invention is specifically based on optical motion capture. In virtual reality interaction based on optical motion-capture technology, the optical imaging system (multiple motion-capture cameras) identifies optical marker points attached to the observed objects (one or more persons); the image-capture system of each motion-capture camera computes the coordinate positions of the marker points, which are then transmitted over a network (wired, wireless, USB, etc.) to the server of the motion-capture cameras (the camera server for short). The camera server receives the coordinate position information from the motion-capture cameras (i.e. the coordinate positions of the user in the physical scene), identifies the observed object from this coordinate information and obtains the user's position information, thereby locating the user. It will be appreciated that if the camera server is to locate a user, the image information of that user must come from at least two different motion-capture cameras.
Specifically, after the camera server determines a user's three-dimensional position in the physical scene from the received coordinate information, it sends this three-dimensional position to the virtual scene client of the corresponding user (when there is only one user in the virtual interaction, the virtual scene client is also the virtual scene server; when there are multiple users, the virtual scene client and the virtual scene server are separate, and the virtual scene server controls the working logic of each virtual scene client). In other words, what the virtual scene client receives is the user's three-dimensional position in the physical scene, collected by the motion-capture cameras and processed by the camera server. The virtual scene client can map this three-dimensional position to a position in the virtual scene, thereby determining the user's position information in the virtual scene. When there are multiple users in the virtual interaction, the virtual scene client, after determining the user's position in the virtual scene, can upload this position information to the virtual scene server.
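For illustration only, the two-camera requirement mentioned above can be pictured as a simple triangulation: each motion-capture camera contributes a ray from its optical centre towards the observed marker, and the marker's three-dimensional position in the physical scene is estimated from the pair of rays. The patent does not specify how the camera server computes coordinates; the midpoint method, the function name triangulate_marker and the example numbers below are assumptions made for this sketch.

```python
import numpy as np

def triangulate_marker(origin_a, dir_a, origin_b, dir_b):
    """Estimate a marker's 3D position from two camera rays.

    Each ray is given by a camera origin and a direction vector. The
    estimate is the midpoint of the shortest segment joining the two
    rays, which is why at least two cameras are needed for a fix.
    """
    d_a = dir_a / np.linalg.norm(dir_a)
    d_b = dir_b / np.linalg.norm(dir_b)
    w0 = origin_a - origin_b
    a, b, c = d_a @ d_a, d_a @ d_b, d_b @ d_b
    d, e = d_a @ w0, d_b @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # rays are (nearly) parallel: no reliable fix
        return None
    s = (b * e - c * d) / denom    # parameter along ray A
    t = (a * e - b * d) / denom    # parameter along ray B
    p_a = origin_a + s * d_a
    p_b = origin_b + t * d_b
    return (p_a + p_b) / 2.0

# Example: two motion-capture cameras observing the same marker.
cam_a = np.array([0.0, 0.0, 3.0])
cam_b = np.array([4.0, 0.0, 3.0])
marker = triangulate_marker(cam_a, np.array([2.0, 1.0, -2.0]),
                            cam_b, np.array([-2.0, 1.0, -2.0]))
print(marker)  # approximate 3D position of the marker in the physical scene
```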
The invention is described below through specific embodiments.
Fig. 1 shows a schematic flowchart of the first embodiment of the virtual reality interaction method provided by the present invention. For ease of explanation, only the parts related to the embodiment of the present invention are shown, and the details are as follows.
Step 101: receive the image information of the motion-capture cameras and the sensing information from the collector, the image information identifying the user's position information in the virtual scene.
When there is only one user in the virtual interaction, there is no information transfer between users, so there is no need to introduce a separate virtual scene server (in other words, the server and the client of the virtual scene are combined into one). When there are two or more users in the virtual interaction, information must be transferred between users, so a virtual scene server must be introduced separately to realize the information transfer between the users. This embodiment of the present invention is described taking the case of a single user in the virtual interaction as an example.
The executing entity of this embodiment may be the client of the virtual scene. In virtual reality interaction, the number of virtual scene clients equals the number of users. In existing virtual reality interaction the virtual scene is mainly a game; it will be appreciated that the virtual scene of the embodiments of the present invention is not limited to games and may also be a virtual scene in other application fields, such as studios, education and training, or military exercises.
The image information received in this step is specifically the user's three-dimensional position in the physical scene. According to the correspondence between the physical scene and the virtual scene, this three-dimensional position can be mapped to a position in the virtual scene, so as to obtain the user's position information in the virtual scene.
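The correspondence between the physical scene and the virtual scene is not spelled out in the patent. As a minimal sketch, assuming the correspondence is a per-axis scale plus offset calibrated between the tracked area and the virtual scene (the function name physical_to_virtual and the example values are hypothetical), the mapping could look like this:

```python
import numpy as np

def physical_to_virtual(p_physical, scale, offset):
    """Map a 3D position in the physical scene to virtual-scene coordinates.

    Assumes the correspondence between the two spaces is a simple per-axis
    scale plus translation; a real deployment might also include a rotation
    calibrated between the tracking axes and the scene axes.
    """
    return np.asarray(p_physical) * np.asarray(scale) + np.asarray(offset)

# Example: a 5 m x 5 m tracked area mapped onto a 50-unit virtual room.
p_virtual = physical_to_virtual([2.1, 0.0, 3.4],
                                scale=[10.0, 10.0, 10.0],
                                offset=[-25.0, 0.0, -25.0])
print(p_virtual)  # position of the user in virtual-scene coordinates
```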
In addition, the collector may specifically be an inertial navigation unit such as a gyroscope attached to the user. After the user's sensing information (which includes velocity and acceleration information) is obtained through the gyroscope, it can be sent to the virtual scene client corresponding to that user by wire or wirelessly, for example over Bluetooth; each user corresponds to one virtual scene client. The sensing information may include the velocity and acceleration information of all users, and the acceleration information may specifically be six-axis acceleration.
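As an illustration of the kind of payload the collector might push to the client, the sketch below assumes a single sensing-information frame carrying velocity and six-axis acceleration; the field names, units and serialization format are assumptions, since the patent only states what the sensing information contains, not how it is encoded.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SensorSample:
    """One sensing-information frame from a body-worn collector (assumed layout)."""
    user_id: str
    timestamp_ms: int
    velocity: tuple          # (vx, vy, vz) in m/s
    acceleration: tuple      # six-axis: 3-axis accelerometer + 3-axis gyroscope

sample = SensorSample(
    user_id="user-1",
    timestamp_ms=1690000000000,
    velocity=(0.1, 0.0, 1.2),
    acceleration=(0.02, 9.81, 0.15, 0.4, -0.1, 0.0),
)
# Serialized payload as it might be pushed to the client over Bluetooth or Wi-Fi.
print(json.dumps(asdict(sample)))
```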
Step 102: determine the user's behavior in the virtual scene according to the sensing information.
The client can determine the user's behavior in the virtual scene according to the received sensing information. The user behavior covers the user's various actions, such as a stretching action, a bending action, a click action or a shaking action, and may of course include other actions. The client may be a backpack host computer carried on the user's back during use, so that the user is freed from the constraint of traditional cabling during virtual interaction and the activity space is extended.
Step 103: render the virtual scene according to the position information, the user behavior and the user's viewing-angle information, and display it to the user.
As described above, each user corresponds to one client (each user carries one backpack host). After receiving the user's position information in the virtual scene and the user's behavior, and combining them with the user's viewing-angle information, the client can render the game virtual scene matching the user's viewing angle and display it to the user through the head-mounted display worn by the user.
In the virtual reality interaction method of this embodiment of the present invention, the client determines the user's position in the virtual scene from the image information of the motion-capture cameras, determines the user's behavior in the virtual scene from the sensing information gathered by the collector, and, combined with the user's viewing-angle information, can render the virtual scene matching the user's viewing angle. Compared with the prior art, the client combines motion-capture data and sensing data when rendering the virtual scene and, in particular, renders the user behavior itself, so the virtual reality interaction method of the present invention can accurately feed the user's operations back into the virtual scene and enhance the fineness, realism and vividness of user behavior in virtual reality interaction.
It should be noted that, because the user moves continuously during the interaction, the system also needs to collect the image information of the next frame and the sensing information of the next moment, obtain the user's position information and behavior at the next moment, and update the virtual scene in time according to the user's real-time motion state, so as to achieve a sense of immersion. Therefore, after step 103 is performed, the flow can return to step 101 and continue.
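Taken together, steps 101 to 103 therefore form a per-frame loop on the client. The following is only a schematic of that loop; the interfaces mocap_source, sensor_source, hmd and scene and their method names are hypothetical placeholders, not an API defined by the patent.

```python
def recognize_behavior(sensing_sample):
    """Placeholder for the threshold-based recognition detailed in the Fig. 2 embodiment."""
    return None  # no behavior recognized in this stub

def interaction_loop(mocap_source, sensor_source, hmd, scene):
    """Schematic per-frame client loop for steps 101-103 (assumed interfaces)."""
    while hmd.is_worn():
        position = mocap_source.next_position()        # step 101: user position in the virtual scene
        sensing = sensor_source.next_sample()          # step 101: sensing information from the collector
        behavior = recognize_behavior(sensing)         # step 102: user behavior (may be None)
        view_angle = hmd.viewing_angle()               # viewing-angle information of the user
        frame = scene.render(position, behavior, view_angle)  # step 103: render the virtual scene
        hmd.display(frame)                             # step 103: show it on the head-mounted display
```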
In addition, as described above, the client needs to determine the user's behavior in the virtual scene according to the sensing information. Step 102 may therefore specifically be: identify the user's posture information according to the sensing information, and then determine the user's behavior in the virtual scene according to the posture information. This is described in detail below with reference to the embodiment of Fig. 2.
Fig. 2 is a schematic flowchart of the second embodiment of the virtual reality interaction method provided by the present invention. For ease of explanation, only the parts related to the embodiment of the present invention are shown, and the details are as follows.
Step 201: receive the image information of the motion-capture cameras and the sensing information from the collector, the image information identifying the user's position information in the virtual scene.
Step 202: identify the user's posture information according to the sensing information.
Because the sensing information includes the user's velocity and acceleration information, the user's posture information can be identified from that velocity and acceleration information.
Step 203: judge whether the posture information identified in step 202 meets a preset condition.
Step 204: if the judgment result is yes, determine that the user's behavior in the virtual scene is the user behavior corresponding to the preset condition, and proceed to step 205.
Step 205: render the virtual scene according to the position information, the user behavior and the user's viewing-angle information, and display it to the user.
Step 206: if the judgment result is no, render the virtual scene according to the position information and the user's viewing-angle information, and display it to the user.
In step 203, the preset conditions may be various threshold conditions predefined in the virtual reality interaction and used to judge whether the user's operation corresponds to a particular action. The preset condition may be one or more of a distance threshold condition, a bone bending threshold condition, a bone stretching threshold condition and a speed threshold condition. By judging whether the posture information meets one or more of these threshold conditions, the user's action can be recognized.
For example, if the preset condition is that the distance between the thumb and the middle finger is less than a preset threshold and the bending of the finger bones exceeds a preset threshold, then the user behavior corresponding to this preset condition is defined as a finger-bending action. In this case, if it is determined from the posture information that the distance between the user's thumb and middle finger is less than the preset threshold and the bending of the finger bones exceeds the preset threshold, the user's behavior in the virtual scene is determined to be a finger-bending action, for example the behavior of grasping an object in the virtual scene. When the virtual scene is rendered in step 205, it then needs to be rendered according to the position information, the user behavior and the user's viewing-angle information (the virtual scene including the action of the user grasping an object in the virtual scene) and displayed to the user, as shown in Fig. 3.
It will be appreciated that the above example of recognizing user behavior is not exhaustive. In practical use, the characteristics of user behaviors can be analysed according to actual requirements, and the threshold conditions on the distance information, bone bending information, bone stretching information and speed information corresponding to each user behavior can be preset in advance, so as to achieve the purpose of recognizing user behavior.
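By way of illustration, the grasp example above can be written as a small threshold check. The posture features, their units and the threshold values below are all assumptions made for this sketch; the patent only requires that posture information be compared against preset distance, bone bending, bone stretching and speed thresholds.

```python
def recognize_behavior(pose, thresholds):
    """Threshold-based behavior recognition in the spirit of steps 202-204.

    `pose` is an assumed dictionary of posture features derived from the
    sensing information; the keys and the example thresholds are illustrative.
    """
    grasp = (pose["thumb_middle_distance_m"] < thresholds["grasp_distance_m"]
             and pose["finger_bend_deg"] > thresholds["grasp_bend_deg"])
    if grasp:
        return "grasp_object"          # e.g. holding an object in the virtual scene
    if pose["hand_speed_mps"] > thresholds["shake_speed_mps"]:
        return "shake"
    return None                        # no preset condition met -> no behavior recognized

pose = {"thumb_middle_distance_m": 0.015, "finger_bend_deg": 70.0, "hand_speed_mps": 0.2}
thresholds = {"grasp_distance_m": 0.02, "grasp_bend_deg": 45.0, "shake_speed_mps": 2.0}
print(recognize_behavior(pose, thresholds))  # -> "grasp_object"
```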
In the virtual reality interaction method of this embodiment of the present invention, after receiving the sensing information from the collector, the client identifies the user's posture information according to the sensing information and recognizes the user behavior by judging whether the posture information meets a preset condition. After the user behavior is recognized, the virtual scene matching the user's viewing angle can be rendered according to the user's position information in the virtual scene, the user behavior and the user's viewing-angle information. Compared with the prior art, the client combines motion-capture data and sensing data when rendering the virtual scene and, in particular, renders the user behavior itself, so the virtual reality interaction method of the present invention can accurately feed the user's operations back into the virtual scene and enhance the fineness, realism and vividness of user behavior in virtual reality interaction. Moreover, when no user behavior is recognized, the virtual scene matching the user's viewing angle can still be rendered from the user's position information in the virtual scene and the user's viewing-angle information, which likewise achieves the purpose of virtual interaction.
The above two embodiments mainly describe a scene in which one person performs the virtual interaction (i.e. there is only one user in the virtual interaction). In real virtual interaction applications, however, there is often more than one person in the scene, so multi-user virtual interaction also needs to be considered. In a multi-user interaction scene, the client must not only recognize the behavior of its own user but also know the behaviors of the other users in the scene; likewise, besides locating its own user, it must also obtain the position information of the other users in the scene. Only after obtaining the position information and behaviors of all users in the scene can the client fully render a virtual scene corresponding to reality.
In other words, on the basis of the single-user virtual interaction above, the client also needs to perform some data transmission and processing in order to obtain the position information and behaviors of the other users in the scene. It should be noted that, for all clients in the virtual scene, the working principle and flow of each client are similar, so the following description takes the workflow of one client as an example.
Fig. 4 is a schematic flowchart of the third embodiment of the virtual reality interaction method provided by the present invention. For ease of explanation, only the parts related to the embodiment of the present invention are shown, and the details are as follows.
Step 401: receive the first image information of the motion-capture cameras and the first sensing information from the collector, the first image information identifying the first position information of the first user in the virtual scene.
The operation of this step is similar to the corresponding steps in the above two embodiments and is not repeated here.
Step 402: identify the posture information of the first user according to the first sensing information.
Because the first sensing information includes the velocity and acceleration information of the first user, the posture information of the first user can be identified from that velocity and acceleration information.
Step 403: determine the first user behavior of the first user in the virtual scene according to the first sensing information. The operation of this step can be realized by steps 202 to 204 in the embodiment of Fig. 2 and is not repeated here.
Step 404: upload the first sensing information to the virtual scene server, and receive the second user behavior and the second position information of the second user transmitted by the virtual scene server. The second position information is the position information of the second user in the virtual scene.
The purpose of this step is to update and synchronize, through the virtual scene server, the sensing information and the position information of all users in the virtual scene. It will be appreciated that the uploading and receiving operations in this step have no strict order; they can be performed synchronously or one after the other.
It should be noted that the received second user behavior is determined according to the second sensing information of the second user, the second sensing information being the sensing information of the second user gathered by a collector. After the second sensing information of the second user is collected, it is transmitted to the client corresponding to the second user. That client can then recognize the second user behavior of the second user according to the second sensing information and upload the second user behavior to the virtual scene server. Alternatively, the client can transmit the second sensing information to the virtual scene server, and the virtual scene server recognizes the second user behavior of the second user according to the second sensing information. The way user behavior is recognized has already been described and is not repeated here. In addition, the client of the second user also needs to synchronize the second position information of the second user to the virtual scene server, so that the virtual scene server can synchronize the position information and user behavior of the second user to the clients of the other users, thereby realizing the synchronization and updating of user behaviors and position information across all clients.
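A minimal sketch of the synchronization in step 404 is given below. It assumes each client uploads its user's already-recognized behavior together with the position, and that the virtual scene server simply stores the latest state per user and hands back the states of the other users; the patent also allows the raw sensing information to be uploaded and recognized on the server, and the class and method names here are hypothetical.

```python
class VirtualSceneServer:
    """Minimal stand-in for the virtual scene server in step 404 (assumed interface)."""

    def __init__(self):
        self._states = {}   # user_id -> {"position": ..., "behavior": ...}

    def upload(self, user_id, position, behavior):
        """Store the latest state reported by one client."""
        self._states[user_id] = {"position": position, "behavior": behavior}

    def peers_of(self, user_id):
        """Return the latest state of every other user, for rendering in step 405."""
        return {uid: s for uid, s in self._states.items() if uid != user_id}

# Each client pushes its own user's state, then pulls the other users' states.
server = VirtualSceneServer()
server.upload("user-2", position=(12.0, 0.0, -3.0), behavior="shake")        # done by user-2's client
server.upload("user-1", position=(2.1, 0.0, 3.4), behavior="grasp_object")   # done by user-1's client
peers = server.peers_of("user-1")
print(peers)  # second position information and second user behavior used in step 405
```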
Step 405: render the virtual scene according to the first position information, the second position information, the first user behavior, the second user behavior and the viewing-angle information of the first user, and display it to the first user.
In the virtual reality interaction method of this embodiment of the present invention, when rendering a multi-user interaction scene, the client needs to perform the corresponding virtual scene rendering according to the first position information of the first user in the virtual scene, the second position information of the second user in the virtual scene, the first user behavior of the first user and the second user behavior of the second user, combined with the viewing-angle information of the first user. Compared with the prior art, the client combines motion-capture data and sensing data when rendering the virtual scene and, in particular, renders the behavior of each user in the scene, so the virtual reality interaction method of the present invention can accurately feed the operations of each user back into the virtual scene and enhance the fineness, realism and vividness of user behavior in virtual reality interaction.
The virtual reality interaction method has been described in detail in the above three embodiments. The apparatus using the above virtual reality interaction method is described in detail below with reference to the accompanying drawings. It should be noted that, where certain terms and definitions have already been described in detail for the virtual reality interaction method, they are not described again in the apparatus embodiment.
To implement the above virtual reality interaction method, an embodiment of the present invention further provides a virtual reality interaction apparatus. As shown in Fig. 5, the interaction apparatus 500 includes a memory 501, a processor 502, and a computer program 503 stored in the memory and executable on the processor.
When executing the computer program 503, the processor 502 implements the following steps:
receiving the first image information and the first sensing information of the first user from the collector, the first image information identifying the first position information of the first user in the virtual scene;
determining, according to the first sensing information, the first user behavior of the first user in the virtual scene; and
rendering the virtual scene according to the first position information, the first user behavior and the viewing-angle information of the first user, and displaying it to the first user.
The interaction apparatus 500 provided by this embodiment of the present invention determines the user's position information in the virtual scene from the image information, determines the user's behavior in the virtual scene from the sensing information gathered by the collector, and, combined with the user's viewing-angle information, can render the virtual scene matching the user's viewing angle. Compared with the prior art, motion-capture data and sensing data are combined when rendering the virtual scene, and the user behavior itself is rendered in particular, so the virtual reality interaction apparatus of the present invention can accurately feed the user's operations back into the virtual scene and enhance the fineness, realism and vividness of user behavior in virtual reality interaction.
It should be noted that, when determining the first user behavior of the first user in the virtual scene according to the first sensing information, the processor 502 specifically performs the following steps:
identifying the posture information of the first user according to the sensing information;
judging whether the posture information meets a preset condition; and
if so, determining that the user behavior of the first user in the virtual scene is the user behavior corresponding to the preset condition. The preset condition includes at least one of a distance threshold condition, a bone bending threshold condition, a bone stretching threshold condition and a speed threshold condition.
Wherein, before performing the step of receiving the first image information and the first sensing information from the collector, the processor 502 also performs the following step: defining the user behavior corresponding to the preset condition, the user behavior including a stretching action, a bending action or a shaking action.
Wherein, when performing the step of receiving the first image information, the processor 502 specifically performs: receiving the first position information collected by the motion-capture cameras and transmitted by the camera server.
Wherein, when there are at least two real users, before performing the step of rendering the virtual scene according to the first position information, the first user behavior and the viewing-angle information of the first user and displaying it to the first user, the processor 502 also performs the following steps:
uploading the first sensing information to the virtual scene server; and
receiving the second user behavior and the second position information of the second user transmitted by the virtual scene server, the second user behavior being determined according to the second sensing information of the second user.
When performing the step of rendering the virtual scene according to the first position information, the first user behavior and the viewing-angle information of the first user and displaying it to the first user, the processor 502 specifically performs:
rendering the virtual scene according to the first position information, the second position information, the first user behavior, the second user behavior and the viewing-angle information of the first user, and displaying it to the first user.
After receiving the sensing information from the collector, the virtual reality interaction apparatus of this embodiment of the present invention identifies the user's posture information according to the sensing information and recognizes the user behavior by judging whether the posture information meets a preset condition. After the user behavior is recognized, the virtual scene matching the user's viewing angle can be rendered according to the user's position information in the virtual scene, the user behavior and the user's viewing-angle information. Compared with the prior art, motion-capture data and sensing data are combined when rendering the virtual scene, and the user behavior itself is rendered in particular, so the virtual reality interaction apparatus of the present invention can accurately feed the user's operations back into the virtual scene and enhance the fineness, realism and vividness of user behavior in virtual reality interaction. Moreover, when no user behavior is recognized, the virtual scene matching the user's viewing angle can still be rendered from the user's position information in the virtual scene and the user's viewing-angle information, which likewise achieves the purpose of virtual interaction.
Correspondingly, an embodiment of the present invention further provides a virtual reality interaction system. As shown in Fig. 6, the interaction system 600 includes an image acquisition device 601, a collector 602, a client 603 and a head-mounted display 604.
The image acquisition device 601 is configured to collect the first image information and transmit it to the client 603, the first image information identifying the first position information of the first user in the virtual scene.
The collector 602 is configured to collect the first sensing information of the first user and transmit it to the client 603.
The client 603 is configured to determine, according to the received first sensing information, the first user behavior of the first user in the virtual scene, render the virtual scene according to the first position information, the first user behavior and the viewing-angle information of the first user, and display it to the first user through the head-mounted display 604.
In a specific implementation, the image acquisition device 601 may be an optical motion-capture image acquisition device, which may specifically include motion-capture cameras and a camera server. The motion-capture cameras, as the optical imaging system, identify the optical marker points attached to the observed objects (one or more persons), and the image-capture system of each motion-capture camera computes the coordinate positions of the marker points, which are then transmitted over a network (wired, wireless, USB, etc.) to the server of the motion-capture cameras (the camera server for short). The camera server receives the coordinate position information from the motion-capture cameras (i.e. the coordinate positions of the user in the physical scene), identifies the observed object from this coordinate information and obtains the user's position information, thereby locating the user. Specifically, after the camera server determines a user's three-dimensional position in the physical scene from the received coordinate information, it sends this three-dimensional position to the client 603 of the corresponding user.
In a specific implementation, the collector 602 may be, for example, an inertial navigation unit such as a gyroscope attached to the user. After the user's sensing information (which includes velocity and acceleration information) is obtained through the gyroscope, it can be sent to the virtual scene client corresponding to that user by wire or wirelessly, for example over Bluetooth; each user corresponds to one virtual scene client. The sensing information may include the velocity and acceleration information of all users, and the acceleration information may specifically be six-axis acceleration.
As described above, each user corresponds to one client (each user carries one backpack host). After receiving the user's position information in the virtual scene and the user's behavior, and combining them with the user's viewing-angle information, the client can render the game virtual scene matching the user's viewing angle and display it to the user through the head-mounted display worn by the user.
In the virtual reality interaction system of this embodiment of the present invention, the client determines the user's position in the virtual scene from the image information of the motion-capture cameras, determines the user's behavior in the virtual scene from the sensing information gathered by the collector, and, combined with the user's viewing-angle information, can render the virtual scene matching the user's viewing angle. Compared with the prior art, the client combines motion-capture data and sensing data when rendering the virtual scene and, in particular, renders the user behavior itself, so the virtual reality interaction system of the present invention can accurately feed the user's operations back into the virtual scene and enhance the fineness, realism and vividness of user behavior in virtual reality interaction.
The above mainly describes a scene in which one person performs the virtual interaction (i.e. there is only one user in the virtual interaction). In real virtual interaction applications, however, there is often more than one person performing the virtual interaction in the scene, so multi-user virtual interaction also needs to be considered.
It should be noted that when there is only one user in the virtual interaction, there is no information transfer between users, so there is no need to introduce a separate virtual scene server (in other words, the server and the client of the virtual scene are combined into one). When there are two or more users in the virtual interaction, information must be transferred between users, so a virtual scene server must be introduced separately to realize the information transfer between the users. Therefore, when implementing multi-user interaction, the virtual reality interaction system 600 further includes a virtual scene server 605. Moreover, because each user corresponds to one client 603 and uses one head-mounted display 604, when there are multiple users the numbers of clients and head-mounted displays increase accordingly.
In a multi-user interaction scene, the client must not only recognize the behavior of its own user but also know the behaviors of the other users in the scene; likewise, besides locating its own user, it must also obtain the position information of the other users in the scene. Only after obtaining the position information and behaviors of all users in the scene can the client fully render a virtual scene corresponding to reality. For all clients in the virtual scene, the working principle and flow of each client are similar, so the following description takes the workflow of one client as an example.
On the basis of the single-user interaction above, the client 603 also needs to upload the first sensing information to the virtual scene server 605; receive the second user behavior and the second position information of the second user transmitted by the virtual scene server 605, the second user behavior being determined according to the second sensing information of the second user; and render the virtual scene according to the first position information, the second position information, the first user behavior, the second user behavior and the viewing-angle information of the first user, and display it to the first user through the head-mounted display 604.
In the virtual reality interaction system of this embodiment of the present invention, when rendering a multi-user interaction scene, the client needs to perform the corresponding virtual scene rendering according to the first position information of the first user in the virtual scene, the second position information of the second user in the virtual scene, the first user behavior of the first user and the second user behavior of the second user, combined with the viewing-angle information of the first user. Compared with the prior art, the client combines motion-capture data and sensing data when rendering the virtual scene and, in particular, renders the behavior of each user in the scene, so the virtual reality interaction system of the present invention can accurately feed the operations of each user back into the virtual scene and enhance the fineness, realism and vividness of user behavior in virtual reality interaction.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered to be beyond the scope of the present invention.
In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division into modules or units is only a division by logical function, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the flow of the above embodiment methods by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium and, when executed by a processor, can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium and the like. It should be noted that the content contained in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, computer-readable media do not include electric carrier signals and telecommunication signals according to legislation and patent practice.
The embodiments described above are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all fall within the protection scope of the present invention.
Claims (10)
- 1. A virtual reality interaction method, characterized in that it is used for interaction between a real user and a virtual scene and comprises: receiving first image information of a first user and first sensing information of the first user from a collector, the first image information identifying first position information of the first user in the virtual scene; determining, according to the first sensing information, a first user behavior of the first user in the virtual scene; and rendering the virtual scene according to the first position information, the first user behavior and viewing-angle information of the first user, and displaying it to the first user.
- 2. The virtual reality interaction method of claim 1, characterized in that determining the first user behavior of the first user in the virtual scene according to the first sensing information comprises: identifying posture information of the first user according to the sensing information; judging whether the posture information meets a preset condition; and if so, determining that the user behavior of the first user in the virtual scene is the user behavior corresponding to the preset condition.
- 3. The virtual reality interaction method of claim 2, characterized in that the preset condition comprises at least one of a distance threshold condition, a bone bending threshold condition, a bone stretching threshold condition and a speed threshold condition.
- 4. The virtual reality interaction method of claim 3, characterized in that, before receiving the first image information and the first sensing information from the collector, the interaction method further comprises: defining the user behavior corresponding to the preset condition; the user behavior comprising a stretching action, a bending action or a shaking action.
- 5. The virtual reality interaction method of claim 1, characterized in that receiving the first image information comprises: receiving the first position information collected by the motion-capture cameras and transmitted by the camera server.
- 6. The virtual reality interaction method of any one of claims 1 to 5, characterized in that, when there are at least two real users, before rendering the virtual scene according to the first position information, the first user behavior and the viewing-angle information of the first user and displaying it to the first user, the method further comprises: uploading the first sensing information to a virtual scene server; and receiving a second user behavior and second position information of a second user transmitted by the virtual scene server, the second user behavior being determined according to second sensing information of the second user; and rendering the virtual scene according to the first position information, the first user behavior and the viewing-angle information of the first user and displaying it to the first user comprises: rendering the virtual scene according to the first position information, the second position information, the first user behavior, the second user behavior and the viewing-angle information of the first user, and displaying it to the first user.
- 7. A virtual reality interaction apparatus, characterized in that it is used for interaction between a real user and a virtual scene and comprises: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the method of any one of claims 1 to 6 when executing the computer program.
- 8. A virtual reality interaction system, characterized in that the system comprises: an image acquisition device, a collector, a client and a head-mounted display; the image acquisition device being configured to collect first image information and transmit it to the client, the first image information identifying first position information of a first user in a virtual scene; the collector being configured to collect first sensing information of the first user and transmit it to the client; and the client being configured to determine, according to the first sensing information, a first user behavior of the first user in the virtual scene, render the virtual scene according to the first position information, the first user behavior and viewing-angle information of the first user, and display it to the first user through the head-mounted display.
- 9. The virtual reality interaction system of claim 8, characterized in that the image acquisition device is specifically an optical motion-capture image acquisition device comprising at least two motion-capture cameras and a camera server; the motion-capture cameras being configured to collect the first image information of the first user and transmit it to the camera server; and the camera server being configured to transmit the first image information to the client.
- 10. The virtual reality interaction system of claim 8 or 9, characterized in that, when there are at least two real users, the system further comprises a virtual scene server; the client being configured to upload the first sensing information to the virtual scene server; receive a second user behavior and second position information of a second user transmitted by the virtual scene server, the second user behavior being determined according to second sensing information of the second user; and render the virtual scene according to the first position information, the second position information, the first user behavior, the second user behavior and the viewing-angle information of the first user, and display it to the first user through the head-mounted display.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010950764.2A CN112198959A (en) | 2017-07-28 | 2017-07-28 | Virtual reality interaction method, device and system |
CN201710632426.2A CN107479699A (en) | 2017-07-28 | 2017-07-28 | Virtual reality interaction method, apparatus and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710632426.2A CN107479699A (en) | 2017-07-28 | 2017-07-28 | Virtual reality interaction method, apparatus and system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010950764.2A Division CN112198959A (en) | 2017-07-28 | 2017-07-28 | Virtual reality interaction method, device and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107479699A true CN107479699A (en) | 2017-12-15 |
Family
ID=60598008
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710632426.2A Pending CN107479699A (en) | 2017-07-28 | 2017-07-28 | Virtual reality interaction method, apparatus and system |
CN202010950764.2A Pending CN112198959A (en) | 2017-07-28 | 2017-07-28 | Virtual reality interaction method, device and system |
Family Applications After (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010950764.2A Pending CN112198959A (en) | 2017-07-28 | 2017-07-28 | Virtual reality interaction method, device and system |
Country Status (1)
Country | Link |
---|---|
CN (2) | CN107479699A (en) |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108595004A (en) * | 2018-04-23 | 2018-09-28 | 新华网股份有限公司 | More people's exchange methods, device and relevant device based on Virtual Reality |
CN109064818A (en) * | 2018-07-23 | 2018-12-21 | 国网电力科学研究院武汉南瑞有限责任公司 | A kind of power equipment training system based on virtual reality |
CN110442229A (en) * | 2018-05-04 | 2019-11-12 | 脸谱科技有限责任公司 | Display in reality environment redirects |
CN111047710A (en) * | 2019-12-03 | 2020-04-21 | 深圳市未来感知科技有限公司 | Virtual reality system, interactive device display method, and computer-readable storage medium |
CN111796670A (en) * | 2020-05-19 | 2020-10-20 | 北京北建大科技有限公司 | Large-space multi-person virtual reality interaction system and method |
CN111813216A (en) * | 2019-08-14 | 2020-10-23 | 北京京东尚科信息技术有限公司 | Apparatus, method, system and storage medium for browsing items |
CN111984114A (en) * | 2020-07-20 | 2020-11-24 | 深圳盈天下视觉科技有限公司 | Multi-person interaction system based on virtual space and multi-person interaction method thereof |
CN112040209A (en) * | 2020-09-14 | 2020-12-04 | 龙马智芯(珠海横琴)科技有限公司 | VR scene projection method and device, projection system and server |
CN112581630A (en) * | 2020-12-08 | 2021-03-30 | 北京外号信息技术有限公司 | User interaction method and system |
WO2021093703A1 (en) * | 2019-11-11 | 2021-05-20 | 北京外号信息技术有限公司 | Interaction method and system based on optical communication apparatus |
CN112862935A (en) * | 2021-03-16 | 2021-05-28 | 天津亚克互动科技有限公司 | Game character motion processing method and device, storage medium and computer equipment |
CN112866286A (en) * | 2018-10-29 | 2021-05-28 | 深圳市瑞立视多媒体科技有限公司 | Data transmission method and device, terminal equipment and storage medium |
CN114201039A (en) * | 2020-09-18 | 2022-03-18 | 聚好看科技股份有限公司 | Display equipment for realizing virtual reality |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113110733A (en) * | 2021-03-03 | 2021-07-13 | 卓才互能(广州)科技股份有限公司 | Virtual field interaction method and system based on remote duplex |
CN116069154A (en) * | 2021-10-29 | 2023-05-05 | 北京字节跳动网络技术有限公司 | Information interaction method, device, equipment and medium based on enhanced display |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103460256A (en) * | 2011-03-29 | 2013-12-18 | 高通股份有限公司 | Anchoring virtual images to real world surfaces in augmented reality systems |
CN104834384A (en) * | 2015-06-01 | 2015-08-12 | 凌亚 | Device and method for improving motion guiding efficiency |
CN205581785U (en) * | 2016-04-15 | 2016-09-14 | 向京晶 | Indoor virtual reality interactive system of many people |
CN106125903A (en) * | 2016-04-24 | 2016-11-16 | 林云帆 | Many people interactive system and method |
CN106843460A (en) * | 2016-12-13 | 2017-06-13 | 西北大学 | The capture of multiple target position alignment system and method based on multi-cam |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105807922B (en) * | 2016-03-07 | 2018-10-02 | 湖南大学 | Implementation method that a kind of amusement of virtual reality drives, apparatus and system |
CN106502427B (en) * | 2016-12-15 | 2023-12-01 | 北京国承万通信息科技有限公司 | Virtual reality system and scene presenting method thereof |
2017

- 2017-07-28 CN CN201710632426.2A patent/CN107479699A/en active Pending
- 2017-07-28 CN CN202010950764.2A patent/CN112198959A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103460256A (en) * | 2011-03-29 | 2013-12-18 | 高通股份有限公司 | Anchoring virtual images to real world surfaces in augmented reality systems |
CN104834384A (en) * | 2015-06-01 | 2015-08-12 | 凌亚 | Device and method for improving motion guiding efficiency |
CN205581785U (en) * | 2016-04-15 | 2016-09-14 | 向京晶 | Indoor virtual reality interactive system of many people |
CN106125903A (en) * | 2016-04-24 | 2016-11-16 | 林云帆 | Many people interactive system and method |
CN106843460A (en) * | 2016-12-13 | 2017-06-13 | 西北大学 | The capture of multiple target position alignment system and method based on multi-cam |
Cited By (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108595004A (en) * | 2018-04-23 | 2018-09-28 | 新华网股份有限公司 | More people's exchange methods, device and relevant device based on Virtual Reality |
CN110442229A (en) * | 2018-05-04 | 2019-11-12 | 脸谱科技有限责任公司 | Display in reality environment redirects |
CN109064818A (en) * | 2018-07-23 | 2018-12-21 | 国网电力科学研究院武汉南瑞有限责任公司 | A kind of power equipment training system based on virtual reality |
CN112866286B (en) * | 2018-10-29 | 2023-03-14 | 深圳市瑞立视多媒体科技有限公司 | Data transmission method and device, terminal equipment and storage medium |
CN112866286A (en) * | 2018-10-29 | 2021-05-28 | 深圳市瑞立视多媒体科技有限公司 | Data transmission method and device, terminal equipment and storage medium |
CN111813216A (en) * | 2019-08-14 | 2020-10-23 | 北京京东尚科信息技术有限公司 | Apparatus, method, system and storage medium for browsing items |
CN111813216B (en) * | 2019-08-14 | 2024-04-09 | 北京京东尚科信息技术有限公司 | Apparatus, method, system and storage medium for browsing articles |
WO2021093703A1 (en) * | 2019-11-11 | 2021-05-20 | 北京外号信息技术有限公司 | Interaction method and system based on optical communication apparatus |
CN111047710A (en) * | 2019-12-03 | 2020-04-21 | 深圳市未来感知科技有限公司 | Virtual reality system, interactive device display method, and computer-readable storage medium |
CN111047710B (en) * | 2019-12-03 | 2023-12-26 | 深圳市未来感知科技有限公司 | Virtual reality system, interactive device display method, and computer-readable storage medium |
CN111796670A (en) * | 2020-05-19 | 2020-10-20 | 北京北建大科技有限公司 | Large-space multi-person virtual reality interaction system and method |
CN111984114B (en) * | 2020-07-20 | 2024-06-18 | 深圳盈天下视觉科技有限公司 | Multi-person interaction system based on virtual space and multi-person interaction method thereof |
CN111984114A (en) * | 2020-07-20 | 2020-11-24 | 深圳盈天下视觉科技有限公司 | Multi-person interaction system based on virtual space and multi-person interaction method thereof |
CN112040209A (en) * | 2020-09-14 | 2020-12-04 | 龙马智芯(珠海横琴)科技有限公司 | VR scene projection method and device, projection system and server |
CN112040209B (en) * | 2020-09-14 | 2021-09-03 | 龙马智芯(珠海横琴)科技有限公司 | VR scene projection method and device, projection system and server |
CN114201039B (en) * | 2020-09-18 | 2023-08-29 | 聚好看科技股份有限公司 | Display device for realizing virtual reality |
CN114201039A (en) * | 2020-09-18 | 2022-03-18 | 聚好看科技股份有限公司 | Display equipment for realizing virtual reality |
CN112581630A (en) * | 2020-12-08 | 2021-03-30 | 北京外号信息技术有限公司 | User interaction method and system |
CN112581630B (en) * | 2020-12-08 | 2024-06-21 | 北京移目科技有限公司 | User interaction method and system |
CN112862935A (en) * | 2021-03-16 | 2021-05-28 | 天津亚克互动科技有限公司 | Game character motion processing method and device, storage medium and computer equipment |
Also Published As
Publication number | Publication date |
---|---|
CN112198959A (en) | 2021-01-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107479699A (en) | Virtual reality exchange method, apparatus and system | |
CN107820593B (en) | Virtual reality interaction method, device and system | |
CN103765410B (en) | Interference based augmented reality hosting platforms | |
WO2018095273A1 (en) | Image synthesis method and device, and matching implementation method and device | |
CN110339569B (en) | Method and device for controlling virtual role in game scene | |
CN109685909A (en) | Display methods, device, storage medium and the electronic device of image | |
CN109448099A (en) | Rendering method, device, storage medium and the electronic device of picture | |
CN108273265A (en) | The display methods and device of virtual objects | |
CN109671141B (en) | Image rendering method and device, storage medium and electronic device | |
KR20090110357A (en) | Augmented reality method and devices using a real time automatic tracking of marker-free textured planar geometrical objects in a video stream | |
CN107911737A (en) | Methods of exhibiting, device, computing device and the storage medium of media content | |
US11750873B2 (en) | Video distribution device, video distribution method, and video distribution process | |
CN108874114A (en) | Realize method, apparatus, computer equipment and the storage medium of virtual objects emotion expression service | |
CN110251942A (en) | Control the method and device of virtual role in scene of game | |
CN109035415B (en) | Virtual model processing method, device, equipment and computer readable storage medium | |
CN108416832A (en) | Display methods, device and the storage medium of media information | |
CN111508033A (en) | Camera parameter determination method, image processing method, storage medium, and electronic apparatus | |
CN108837510A (en) | Methods of exhibiting and device, storage medium, the electronic device of information | |
CN110339571A (en) | Event generation method and device, storage medium and electronic device | |
CN111124902A (en) | Object operating method and device, computer-readable storage medium and electronic device | |
CN107609946A (en) | A kind of display control method and computing device | |
CN108229678A (en) | Network training method, method of controlling operation thereof, device, storage medium and equipment | |
CN107172136A (en) | The synchronous method and device of voxel data | |
CN110349504A (en) | A kind of museum guiding system based on AR | |
CN110096144A (en) | A kind of interaction holographic projection methods and system based on three-dimensional reconstruction |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20171215 |