CN107885334A - Information processing method and virtual device - Google Patents
Information processing method and virtual device
- Publication number
- CN107885334A (application CN201711184303.3A)
- Authority
- CN
- China
- Prior art keywords
- data
- information
- scene
- virtual
- attitude
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Graphics (AREA)
- Computer Hardware Design (AREA)
- Software Systems (AREA)
- Human Computer Interaction (AREA)
- User Interface Of Digital Computer (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention provides an information processing method and a virtual device. The method includes: obtaining first data about a first scene, the first data including first position information and first attitude information of a first object in the first scene; obtaining second data, the second data including second position information and second attitude information; and controlling display of the first object based on the first data and the second data. The invention enables multiple virtual devices to perform operations on the same virtual object.
Description
Technical field
Embodiments of the present invention relate to the field of information processing, and in particular to an information processing method and a virtual device.
Background
With an existing virtual device such as an AR device, the user can directly see the external real world through the display unit (for example, the lenses) of the device, while virtual three-dimensional object images presented on the lenses appear as if they were right in front of the user's eyes.
In the prior art, however, a virtual object can only be displayed on the virtual device worn by the user himself or herself, and multiple virtual devices cannot jointly operate on the same virtual object. In other words, how to make the same virtual object observable by, and cooperatively operable among, multiple people wearing different AR devices has become a significant problem.
Summary of the invention
Embodiments of the present invention provide an information processing method and a virtual device that enable multiple virtual devices to perform operations on the same virtual object.
To solve the above technical problem, embodiments of the present invention provide the following technical solutions:
An information processing method, applied to a virtual device, including:
obtaining first data about a first scene, the first data including first position information and first attitude information of a first object in the first scene;
obtaining second data, the second data including second position information and second attitude information;
controlling display of the first object based on the first data and the second data.
Obtaining the first data about the first scene includes:
sending request information for requesting the first data;
receiving the returned return information and obtaining the first data from the return information.
Obtaining the second data includes:
receiving second data that is input, or receiving second data that is transmitted, or
obtaining second data of the virtual device in the first scene.
Controlling display of the first object based on the first data and the second data includes:
determining a relative attitude of the first object based on the second attitude information and the first attitude information;
controlling display of the first object based on the second position information and the relative attitude.
The method further includes:
receiving an operation instruction for a second object;
generating the first data based on the operation instruction;
uploading the first data;
wherein the first object and the second object are the same or different.
An embodiment of the present invention further provides a virtual device, including:
a processor configured to obtain first data about a first scene and second data relative to the first scene, and to control display of the first object based on the first data and the second data; wherein the first data includes first position information and first attitude information of the first object in the first scene, and the second data includes second position information and second attitude information.
The virtual device further includes:
an acquisition module configured to obtain the first data and the second data, and to transmit the obtained first data and second data to the processor.
The acquisition module is further configured to send request information for requesting the first data, receive the returned return information, and obtain the first data from the return information.
The acquisition module is further configured to receive second data that is input, or receive second data that is transmitted, or obtain second data of the virtual device in the first scene.
The processor determines the relative attitude of the first object based on the second attitude information and the first attitude information, and controls display of the first object based on the second position information and the relative attitude.
As can be seen from the embodiments disclosed above, the embodiments of the present invention have the following beneficial effects:
Embodiments of the present invention enable multiple virtual devices to perform operations on the same virtual object in the same scene, realizing simultaneous multi-user operation. They can also be applied in scenarios such as virtual design presentation and explanation, while improving the user experience of the virtual device.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the information processing method in an embodiment of the present invention;
Fig. 2 is a schematic flowchart of obtaining the first data about the first scene in an embodiment of the present invention;
Fig. 3 is a flowchart of the method for controlling display of the first object in an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of the virtual device in an embodiment of the present invention.
Detailed description of the embodiments
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings, but they are not intended to limit the present invention.
It should be understood that various modifications may be made to the disclosed embodiments. Therefore, the description above should not be regarded as limiting, but merely as examples of embodiments. Those skilled in the art will conceive of other modifications within the scope and spirit of the present disclosure.
The accompanying drawings, which are included in and constitute a part of the specification, illustrate embodiments of the disclosure and, together with the general description of the disclosure given above and the detailed description of the embodiments given below, serve to explain the principles of the disclosure.
These and other characteristics of the present invention will become apparent from the following description of preferred forms of embodiments, given as non-limiting examples, with reference to the accompanying drawings.
It should also be understood that, although the invention has been described with reference to some specific examples, a person skilled in the art may realize many other equivalents of the invention, which have the features set forth in the claims and therefore all fall within the scope of protection defined thereby.
The above and other aspects, features and advantages of the present disclosure will become more apparent in view of the following detailed description when read in conjunction with the accompanying drawings.
Specific embodiments of the present disclosure are described below with reference to the accompanying drawings; it should be understood, however, that the disclosed embodiments are merely examples of the disclosure, which may be implemented in various ways. Well-known and/or repeated functions and structures are not described in detail, so as not to obscure the disclosure with unnecessary detail. Therefore, the specific structural and functional details disclosed herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching a person skilled in the art to use the disclosure in virtually any appropriately detailed structure.
This specification may use the phrases "in one embodiment", "in another embodiment", "in yet another embodiment" or "in other embodiments", each of which may refer to one or more of the same or different embodiments according to the disclosure.
Embodiments of the present invention are described in detail below with reference to the accompanying drawings. Embodiments of the present invention provide an information processing method that allows different users to display, view and control the same virtual object through their respective virtual devices. It realizes simultaneous multi-user operation, can also be applied in scenarios such as virtual design presentation and explanation, and improves the user experience of the virtual device.
Embodiments of the present invention can be applied in scenarios where a user performs virtual display using a virtual device. For example, in a first scene the user may, through the virtual device, add a virtual object to be displayed in the first scene, or adjust various actual states of that virtual object. The first scene may be a real scene or a virtual scene image. What the present application aims to achieve is that multiple users can control the same virtual object, and that the display state of the virtual object can be adjusted correspondingly according to the position or attitude of each user. For example, within the same area, user A and user B can simultaneously control and display a virtual object in the scene of that area. When user A controls the virtual object, the virtual object controlled by user A can be displayed correspondingly on user B's virtual device according to user B's position information and attitude information, thereby realizing simultaneous multi-user operation on, and corresponding display of, the virtual object.
Specifically, Fig. 1 is a schematic flowchart of an information processing method in an embodiment of the present invention, which may include:
obtaining first data about a first scene, the first data including first position information and first attitude information of a first object in the first scene;
obtaining second data, the second data including second position information and second attitude information;
controlling display of the first object based on the first data and the second data.
The above method of the embodiment of the present invention may be applied in a virtual device. When a user performs a display operation on a virtual object through the virtual device, or when the virtual device performs a virtual display operation, the first data under the current first scene can be obtained. The first data may include the first position information and the first attitude information of the first object in the first scene. Here, the first object may include a virtual object displayed in the first scene. That is, when the user runs a virtual display program in the first scene using the virtual device, the first data of every virtual object in the first scene may first be obtained; alternatively, when a virtual object to be added to the first scene is selected, the first data of the virtual object to be added is obtained. The first data may include the first position information of the virtual object in the first scene and the first attitude information of the virtual object in the first scene. The first position information is the position of the virtual object in the first scene, and the first attitude information may include information such as the angle, orientation and motion state of the virtual object in the first scene. In addition, the first scene in the embodiment of the present invention may be a virtual scene displayed by the virtual device or a real scene in the real environment.
Meanwhile, when performing the virtual display, the virtual device may also obtain second data. The second data may include the current position information and attitude information of the user or of the virtual device, i.e. the above-mentioned second position information and second attitude information. The embodiment of the present invention aims at synchronized control and display of the virtual object in the first scene by multiple users; therefore, the current position or attitude information of each user must be taken into account to determine the relative position and relative attitude of the virtual object with respect to each user, so that the display state on each user's virtual device can be determined from this information. For users at different positions or with different attitudes, the relative position or relative attitude with respect to the virtual object in the first scene may differ, and the present application determines the display state of the virtual object from each user's perspective by determining the changes in this relative position and relative attitude. That is, after the first data and the second data are obtained, display of the first object can be controlled based on the obtained first data and second data.
Based on the above configuration, different users can control the same virtual object, and the object can be correspondingly displayed on the virtual devices of the other users, improving the user experience shared among users.
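For illustration only, the first data and second data described above could be modeled as in the following Python sketch; the names (FirstData, SecondData, control_display) and the use of simple yaw/pitch/roll angles are assumptions for the example and are not part of the patent.

```python
# Illustrative sketch only: hypothetical data structures for the "first data"
# (object pose in the first scene) and "second data" (user/device pose),
# plus the three steps of Fig. 1. Names are assumptions, not the patent's API.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]      # (x, y, z) position in scene coordinates
Euler = Tuple[float, float, float]     # (yaw, pitch, roll) angles as a simplified attitude


@dataclass
class FirstData:
    """First position and first attitude information of the first object in the first scene."""
    object_id: str
    position: Vec3
    attitude: Euler


@dataclass
class SecondData:
    """Second position and second attitude information of the user or virtual device."""
    position: Vec3
    attitude: Euler


def control_display(first: FirstData, second: SecondData) -> None:
    """Fig. 1 flow: with first data and second data obtained, control display of the first object."""
    relative = tuple(a - b for a, b in zip(first.attitude, second.attitude))
    print(f"Display {first.object_id} at {first.position} "
          f"for a viewer at {second.position} with relative attitude {relative}")


control_display(
    FirstData("virtual-chair", (1.0, 0.0, 2.0), (90.0, 0.0, 0.0)),
    SecondData((0.0, 1.6, -2.0), (30.0, 0.0, 0.0)),
)
```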
The following describes in detail how the first data of the first scene is obtained in the embodiment of the present invention.
In the embodiment of the present invention, the operation of obtaining the first data of the first scene may be performed when a first trigger signal is received.
The virtual device may generate the first trigger signal at startup, i.e. the operation of obtaining the first data under the current first scene can be performed when the virtual device is started. Alternatively, the virtual device may generate the first trigger signal by detecting a preset operation, for example when a preset button is detected as pressed or when a preset gesture is detected. Alternatively, the first trigger signal may be generated when information about adding a virtual object is detected: when the user, by operating on the virtual device, selects a virtual object to be added to the first scene for virtual display, the first trigger signal can be generated so as to perform the operation of obtaining the first data about that virtual object. The embodiment of the present invention is not limited to this; those skilled in the art may also realize the generation and detection of the first trigger signal through other configurations in order to perform the operation of obtaining the first data. The trigger sources just listed are sketched in code below.
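The sketch below merely enumerates the hypothetical trigger sources named above (device startup, preset button, preset gesture, object addition); the enum and function names are illustrative assumptions.

```python
# Illustrative sketch: possible sources of the first trigger signal described above.
# The enum values and the dispatch function are hypothetical names.
from enum import Enum, auto


class FirstTrigger(Enum):
    DEVICE_STARTUP = auto()   # generated when the virtual device starts
    PRESET_BUTTON = auto()    # generated when a preset button press is detected
    PRESET_GESTURE = auto()   # generated when a preset gesture is detected
    OBJECT_ADDED = auto()     # generated when a virtual object is added to the first scene


def on_first_trigger(trigger: FirstTrigger) -> None:
    """Any of the trigger sources leads to the same acquisition of the first data."""
    print(f"Trigger {trigger.name}: start acquiring first data for the current first scene")


on_first_trigger(FirstTrigger.OBJECT_ADDED)
```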
As shown in Fig. 2, a schematic flowchart of obtaining the first data about the first scene in an embodiment of the present invention, the process may include:
sending request information for requesting the first data;
receiving the returned return information and obtaining the first data from the return information.
Specifically, when the operation of obtaining the first data is performed, request information may be sent to a server device, or to another device used to manage and store the data information of the virtual objects in each scene (hereinafter collectively referred to as the server device), and return information corresponding to the request information may be received. The return information may include the requested first data.
The requested first data may be the first data of all virtual objects in the first scene, or the first data of a selected virtual object. The request information for the first data may include identification information about the first scene and/or identification information of the virtual object, so that the server device can know from which scene the first data of virtual objects is requested and of which virtual objects the first data is requested. The identification information of the first scene may be position range information about the first scene. The identification information of a virtual object may include information that uniquely identifies the virtual object in the first scene, such as the name of the virtual object.
Based on the above, before the request information for the first data is generated, the method may further include: obtaining first identification information of the first scene and second identification information of the virtual object. The first identification information of the first scene may be obtained automatically by a positioning apparatus, or by receiving input information from the user. The second identification information of the virtual object may be obtained through user input, for example by receiving selection information about the virtual object.
After the first identification information and the second identification information are obtained, the request information for the first data can be generated based on the first identification information and the second identification information and sent to the corresponding server device. After receiving the request information for the first data, the server device can obtain the first identification information and the second identification information contained therein, query and obtain the first data corresponding to the second identification information in the first scene corresponding to the first identification information, generate the return information from the queried first data, and send the return information to the corresponding virtual device. The server device may encrypt the queried first data to ensure data security.
Here, the data about the virtual objects in each scene stored in the server device may be obtained from virtual devices: when a user operates on a virtual object through a virtual device, the relevant data of the virtual object after the operation can be sent to the server device for storage, so that other users can obtain the relevant data and perform synchronized display.
After the return information returned from the server device is received, the return information can be decrypted and the first data of the requested virtual object can be parsed from it.
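The request/return exchange described above might look like the following sketch. The JSON field names, the in-memory "server" store, and the base64 stand-in for encryption are all assumptions used purely for illustration, not the patent's actual protocol.

```python
# Illustrative sketch of the request/return exchange with the server device.
# Field names, the base64 "encryption" placeholder and the in-memory store are assumptions.
import base64
import json
from typing import Optional

# Hypothetical server-side store: scene identification -> object identification -> first data.
SERVER_STORE = {
    "scene-area-001": {   # first identification information (e.g. a position range)
        "virtual-chair": {"position": [1.0, 0.0, 2.0], "attitude": [90.0, 0.0, 0.0]},
    }
}


def build_request(scene_id: str, object_id: Optional[str] = None) -> dict:
    """Request information carrying the scene and/or virtual object identification."""
    return {"scene_id": scene_id, "object_id": object_id}


def server_handle(request: dict) -> bytes:
    """Server device side: query the first data and return it ('encrypted' here with base64)."""
    scene = SERVER_STORE.get(request["scene_id"], {})
    if request["object_id"] is None:
        first_data = scene   # first data of all virtual objects in the scene
    else:
        first_data = {request["object_id"]: scene.get(request["object_id"])}
    return base64.b64encode(json.dumps(first_data).encode())


def parse_return(return_info: bytes) -> dict:
    """Virtual device side: decrypt the return information and parse out the first data."""
    return json.loads(base64.b64decode(return_info).decode())


reply = server_handle(build_request("scene-area-001", "virtual-chair"))
print(parse_return(reply))
```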
Based on the above configuration, the data information of the virtual objects in a scene can be conveniently shared, realizing simultaneous operation by multiple users.
In addition, the embodiment of the present invention also performs the operation of obtaining the second data. As described above, the second data may include the second position information and the second attitude information of the user object in the first scene. Here, obtaining the second data in the embodiment of the present invention may include:
receiving second data that is input, or receiving second data that is transmitted, or
obtaining second data of the virtual device in the first scene.
In the embodiment of the present invention, the user may input his or her second position information and second attitude information through an input module, which may include a touch screen, an audio input module, and the like. Alternatively, the virtual device may receive second data transmitted by another electronic device, for example by communicating with the other electronic device through a communication module to obtain the second data transmitted from it. Alternatively, the information of the second data of the virtual device may be detected by a detection module of the device: for example, the current position of the virtual device can be located to obtain the second position information, and the current attitude of the virtual device relative to the first scene can be detected to obtain the second attitude information. If the second data includes other information, the detection module may also perform the corresponding detection.
Through the above configuration, acquisition of the second data is realized: the user may directly perform the detection and acquisition of the second data in the first scene, and when the user is not in the first scene, the second data of the first scene may also be input to realize the display and control of the virtual objects of the corresponding scene.
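The three acquisition paths for the second data (user input, transmission from another device, on-device detection) could be unified as in the sketch below; the function names, the fallback order, and the placeholder sensor reading are assumptions, not prescribed by the patent.

```python
# Illustrative sketch of the three ways of obtaining the second data described above.
# Function names and the fallback order are assumptions.
from typing import Optional, Tuple

Pose = Tuple[Tuple[float, float, float], Tuple[float, float, float]]  # (position, attitude)


def from_user_input(raw: Optional[str]) -> Optional[Pose]:
    """Second data entered through an input module (touch screen, audio input, ...)."""
    if not raw:
        return None
    x, y, z, yaw, pitch, roll = (float(v) for v in raw.split(","))
    return (x, y, z), (yaw, pitch, roll)


def from_remote_device(message: Optional[dict]) -> Optional[Pose]:
    """Second data transmitted by another electronic device via a communication module."""
    if not message:
        return None
    return tuple(message["position"]), tuple(message["attitude"])


def from_detection_module() -> Pose:
    """Second data detected on the device itself (positioning plus attitude sensors)."""
    return (0.0, 1.6, -2.0), (30.0, 0.0, 0.0)   # placeholder sensor reading


def obtain_second_data(raw_input: Optional[str] = None, message: Optional[dict] = None) -> Pose:
    return from_user_input(raw_input) or from_remote_device(message) or from_detection_module()


print(obtain_second_data("0,1.6,-2,30,0,0"))
```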
After the first data and the second data are obtained, the corresponding display control of the virtual object can be performed. Fig. 3 is a flowchart of the method for controlling display of the first object in an embodiment of the present invention, which may include:
determining the relative attitude of the first object based on the second attitude information and the first attitude information;
controlling display of the first object based on the second position information and the relative attitude.
Based on the above, once the first data of the first object in the first scene and the second data relative to the first scene are obtained, the operation of controlling the display of the first object based on the first data and the second data can be performed.
Specifically, the relative attitude information of the first object with respect to the current user can be determined based on the first attitude information in the first data and the second attitude information in the second data. Once the relative attitude is determined, the display state of the first object from the current user's perspective can be determined based on the second position information and the relative attitude.
The above attitude information or relative attitude may include information such as the display angle, orientation and dynamic effect of the first object relative to a reference in the first scene, so that the display state of the first object in the first scene can be conveniently determined.
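As a simplified numerical illustration of Fig. 3, the sketch below treats attitudes as yaw/pitch/roll angles and computes a relative attitude by angle subtraction; a real device would typically use rotation matrices or quaternions, and all names here are hypothetical.

```python
# Illustrative sketch of the display control of Fig. 3, using yaw/pitch/roll angles
# in degrees as a simplified attitude representation. Names are hypothetical.
from typing import Tuple

Euler = Tuple[float, float, float]
Vec3 = Tuple[float, float, float]


def relative_attitude(first_attitude: Euler, second_attitude: Euler) -> Euler:
    """Determine the relative attitude of the first object from the first and second attitude."""
    return tuple((a - b) % 360.0 for a, b in zip(first_attitude, second_attitude))


def display_state(second_position: Vec3, rel_attitude: Euler) -> dict:
    """Control display of the first object from the second position info and the relative attitude."""
    return {
        "viewpoint": second_position,    # where the user/device is in the first scene
        "display_angle": rel_attitude,   # angle at which the object is rendered for this user
    }


rel = relative_attitude((90.0, 0.0, 0.0), (30.0, 10.0, 0.0))
print(display_state((0.0, 1.6, -2.0), rel))
```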
Based on the above configuration, different users can perform display control of the same object, and different display controls can be performed based on the position information and relative attitude information of each user object.
Further, the user object in the embodiment of the present invention may also adjust the display effect of the displayed virtual object or perform other operations, which may specifically include:
receiving an operation instruction for a second object;
generating the first data based on the operation instruction;
uploading the first data;
wherein the first object and the second object are the same or different.
The virtual device may receive operation instructions about the virtual object from the user in real time, such as the operation instruction for the second object described above. The operation instruction may include instructions such as deletion, addition, control of a dynamic effect, and control of a display effect; various operations on the virtual object can all serve as embodiments of the present invention.
The user may operate on the virtual object in the first scene through the virtual device; the virtual device can perform display control of the virtual object based on the operation instruction, while uploading the current first data of the virtual object to the server device for storage, so that other users can obtain the state information (the first data information) of the virtual object in the first scene.
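The operate-then-upload sequence described above could be sketched as follows; the instruction types and the upload stub are hypothetical placeholders standing in for whatever transport the server device actually uses.

```python
# Illustrative sketch of handling an operation instruction on the second object and
# uploading the resulting first data. Instruction names and the upload stub are hypothetical.
from typing import Dict, List


def apply_instruction(first_data: Dict, instruction: Dict) -> Dict:
    """Generate updated first data from an operation instruction (delete, add, effects...)."""
    kind = instruction["type"]
    if kind == "set_display_effect":
        first_data["display_effect"] = instruction["value"]
    elif kind == "set_dynamic_effect":
        first_data["dynamic_effect"] = instruction["value"]
    elif kind == "delete":
        first_data["deleted"] = True
    return first_data


UPLOADED: List[Dict] = []   # stand-in for sending the data to the server device


def upload_first_data(first_data: Dict) -> None:
    """Upload the current first data so other users can synchronize their display."""
    UPLOADED.append(first_data)


data = {"object_id": "virtual-chair", "position": [1.0, 0.0, 2.0]}
data = apply_instruction(data, {"type": "set_display_effect", "value": "highlight"})
upload_first_data(data)
print(UPLOADED)
```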
In summary, the embodiment of the present invention enables multiple virtual devices to perform operations on the same virtual object in the same scene, realizing simultaneous multi-user operation. It can also be applied in scenarios such as virtual design presentation and explanation, while improving the user experience of the virtual device.
In addition, an embodiment of the present invention further provides a virtual device to which the information processing method described in the above embodiment can be applied. Fig. 4 is a schematic structural diagram of the virtual device in an embodiment of the present invention.
The virtual device in the embodiment of the present invention may include a processor 1, an acquisition module 2 and a virtual display module 3.
The processor 1 may obtain the first data about the first scene and the second data, and control display of the first object based on the first data and the second data; the first data includes the first position information and the first attitude information of the first object in the first scene, and the second data includes the second position information and the second attitude information.
The acquisition module 2 may be used to obtain the above first data and second data, and transmit the obtained first data and second data to the processor 1.
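Purely for illustration, the structure of Fig. 4 might be wired up as in the sketch below, where an acquisition module feeds first and second data to a processor that drives a virtual display module; all class and method names, and the placeholder data, are assumptions.

```python
# Illustrative sketch of the virtual device structure of Fig. 4:
# acquisition module -> processor -> virtual display module. Names are hypothetical.
class AcquisitionModule:
    def obtain_first_data(self) -> dict:
        # Placeholder first data of a virtual object in the first scene.
        return {"object_id": "virtual-chair", "position": (1.0, 0.0, 2.0), "attitude": (90.0, 0.0, 0.0)}

    def obtain_second_data(self) -> dict:
        # Placeholder second data of the user/device.
        return {"position": (0.0, 1.6, -2.0), "attitude": (30.0, 0.0, 0.0)}


class VirtualDisplayModule:
    def show(self, object_id: str, state: dict) -> None:
        print(f"Rendering {object_id} with state {state}")


class Processor:
    def __init__(self, display: VirtualDisplayModule) -> None:
        self.display = display

    def control_display(self, first: dict, second: dict) -> None:
        # Relative attitude from the first and second attitude information.
        rel = tuple(a - b for a, b in zip(first["attitude"], second["attitude"]))
        self.display.show(first["object_id"],
                          {"viewpoint": second["position"], "relative_attitude": rel})


class VirtualDevice:
    """Processor 1, acquisition module 2 and virtual display module 3 of the embodiment."""
    def __init__(self) -> None:
        self.display = VirtualDisplayModule()
        self.processor = Processor(self.display)
        self.acquisition = AcquisitionModule()

    def run_once(self) -> None:
        first = self.acquisition.obtain_first_data()
        second = self.acquisition.obtain_second_data()
        self.processor.control_display(first, second)


VirtualDevice().run_once()
```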
Specifically, when a user performs a display operation on a virtual object through the virtual device, or when the virtual device performs a virtual display operation, the acquisition module 2 obtains the first data under the current first scene. The first data may include the first position information and the first attitude information of the first object in the first scene. Here, the first object may include a virtual object displayed in the first scene. That is, when the user runs a virtual display program in the first scene using the virtual device, the first data of every virtual object in the first scene may be obtained; alternatively, when a virtual object to be added to the first scene is selected, the first data of that virtual object is obtained. The first data may include the first position information of the virtual object in the first scene and the first attitude information of the virtual object in the first scene. The first position information is the position of the virtual object in the first scene, and the first attitude information may include information such as the angle, orientation and motion state of the virtual object in the first scene.
Meanwhile, when performing the virtual display, the acquisition module 2 may also obtain the second data. The second data may include the current position information and attitude information of the user or of the virtual device, i.e. the above-mentioned second position information and second attitude information. The embodiment of the present invention aims at synchronized control and display of the virtual object in the first scene by multiple users; therefore, the current position or attitude information of each user must be taken into account to determine the relative position and relative attitude of the virtual object with respect to each user, so that the display state on each user's virtual device can be determined from this information. That is, after the first data and the second data are obtained, display of the first object can be controlled based on the obtained first data and second data.
Based on the above configuration, different users can control the same virtual object, and the object can be correspondingly displayed on the virtual devices of the other users, improving the user experience shared among users.
The following describes in detail how the first data of the first scene is obtained in the embodiment of the present invention.
In the embodiment of the present invention, the operation of obtaining the first data of the first scene may be performed when the first trigger signal is received.
The first trigger signal may be generated when the virtual device is started, i.e. the acquisition module 2 may receive the first trigger signal when the virtual device is started and perform the operation of obtaining the first data under the current first scene. Alternatively, the virtual device may generate the first trigger signal by detecting a preset operation, for example when a preset button is detected as pressed or when a preset gesture is detected. Alternatively, the first trigger signal may be generated when information about adding a virtual object is detected: when the user, by operating on the virtual device, selects a virtual object to be added to the first scene for virtual display, the first trigger signal can be generated, and the acquisition module 2 can correspondingly perform the operation of obtaining the first data about that virtual object. The embodiment of the present invention is not limited to this; those skilled in the art may also realize the generation and detection of the first trigger signal through other configurations in order to perform the operation of obtaining the first data.
In addition, the acquisition module 2 may be further configured to send request information for requesting the first data, receive the returned return information, and obtain the first data from the return information.
Specifically, when the acquisition module 2 performs the operation of obtaining the first data, it may send request information to a server device, or to another device used to manage and store the data information of the virtual objects in each scene (hereinafter collectively referred to as the server device), and may receive return information corresponding to the request information. The return information may include the requested first data.
The requested first data may be the first data of all virtual objects in the first scene, or the first data of a selected virtual object. The request information for the first data may include identification information about the first scene and/or identification information of the virtual object, so that the server device can know from which scene the first data of virtual objects is requested and of which virtual objects the first data is requested. The identification information of the first scene may be position range information about the first scene. The identification information of a virtual object may include information that uniquely identifies the virtual object in the first scene, such as the name of the virtual object.
Based on the above, before the acquisition module 2 generates the request information for the first data, the method may further include: obtaining first identification information of the first scene and second identification information of the virtual object. The first identification information of the first scene may be obtained automatically by a positioning apparatus, or by receiving input information from the user. The second identification information of the virtual object may be obtained through user input, for example by receiving selection information about the virtual object.
After the first identification information and the second identification information are obtained, the acquisition module 2 can generate the request information for the first data based on the first identification information and the second identification information and send the request information to the corresponding server device. After receiving the request information for the first data, the server device can obtain the first identification information and the second identification information contained therein, query and obtain the first data corresponding to the second identification information in the first scene corresponding to the first identification information, generate the return information from the queried first data, and send the return information to the corresponding virtual device. The server device may encrypt the queried first data to ensure data security.
Here, the data about the virtual objects in each scene stored in the server device may be obtained from virtual devices: when a user operates on a virtual object through a virtual device, the relevant data of the virtual object after the operation can be sent to the server device for storage, so that other users can obtain the relevant data and perform synchronized display.
After the acquisition module 2 receives the return information returned from the server device, the return information can be decrypted, the first data of the requested virtual object can be parsed from it, and the parsed first data can be transmitted to the processor 1 for subsequent display control processing.
Based on the above configuration, the data information of the virtual objects in a scene can be conveniently shared, realizing simultaneous operation by multiple users.
Further, the acquisition module 2 in the embodiment of the present invention may also perform the operation of obtaining the second data. As described above, the second data may include the second position information and the second attitude information of the user object in the first scene. Here, the acquisition module is further configured to receive second data that is input, or receive second data that is transmitted, or obtain second data of the virtual device in the first scene.
In the embodiment of the present invention, the user may input his or her second position information and second attitude information through an input module, which may include a touch screen, an audio input module, and the like. Alternatively, the virtual device may receive second data transmitted by another electronic device, for example by communicating with the other electronic device through a communication module to obtain the second data transmitted from it. Alternatively, the information of the second data of the virtual device may be detected by a detection module of the device: for example, the current position of the virtual device can be located to obtain the second position information, and the current attitude of the virtual device relative to the first scene can be detected to obtain the second attitude information. If the second data includes other information, the detection module may also perform the corresponding detection. After obtaining the second data, the acquisition module 2 may also send the second data to the processor 1 for subsequent display control processing.
Through the above configuration, acquisition of the second data is realized: the user may directly perform the detection and acquisition of the second data in the first scene, and when the user is not in the first scene, the second data of the first scene may also be input to realize the display and control of the virtual objects of the corresponding scene.
After receiving the first data and the second data, the processor 1 can determine the display angle of the first object based on the second attitude information and the first attitude information, and control display of the first object based on the second position information and the display angle.
Once the first data and the second data are obtained, the corresponding display control of the virtual object can be performed. The processor 1 can determine the relative attitude of the first object based on the second attitude information and the first attitude information, and control the display state of the first object on the virtual display module 3 based on the second position information and the relative attitude.
Based on the above, after the processor 1 obtains the first data of the first object in the first scene and the second data relative to the first scene, the operation of controlling the display of the first object based on the first data and the second data can be performed.
Specifically, the processor 1 can determine the relative attitude information of the first object with respect to the current user based on the first attitude information in the first data and the second attitude information in the second data. Once the relative attitude is determined, the display state of the first object from the current user's perspective can be determined based on the second position information and the relative attitude. On this basis, the processor 1 can correspondingly control the virtual display module 3 to perform display control of the display state of the corresponding virtual object.
The above attitude information or relative attitude may include information such as the display angle, orientation and dynamic effect of the first object relative to a reference in the first scene, so that the display state of the first object in the first scene can be conveniently determined.
Based on the above configuration, different users can perform display control of the same object, and different display controls can be performed based on the position information and relative attitude information of each user object.
Further, the user object in the embodiment of the present invention may also adjust the display effect of the displayed virtual object or perform other operations. Specifically, the acquisition module 2 may also receive an operation instruction for the second object, generate the first data based on the operation instruction, and upload the first data to the server device; wherein the first object and the second object are the same or different.
The acquisition module 2 may receive operation instructions about the virtual object from the user in real time, such as the operation instruction for the second object described above. The operation instruction may include instructions such as deletion, addition, control of a dynamic effect, and control of a display effect; various operations on the virtual object can all serve as embodiments of the present invention.
The user may operate on the virtual object in the first scene through the virtual device; the acquisition module 2 can perform display control of the virtual object based on the operation instruction, while uploading the current first data of the virtual object to the server device for storage, so that other users can obtain the state information (the first data information) of the virtual object in the first scene.
In addition, the virtual device in the embodiment of the present invention may include an augmented reality display device or a virtual reality display device, such as a pair of virtual glasses; when using the virtual device to display a virtual object, the user can correspondingly obtain the virtual object information under the first scene and perform the corresponding display control.
In summary, the embodiment of the present invention enables multiple virtual devices to perform operations on the same virtual object in the same scene, realizing simultaneous multi-user operation. It can also be applied in scenarios such as virtual design presentation and explanation, while improving the user experience of the virtual device.
It will be clear to those skilled in the art that, for convenience and brevity of description, for the electronic device to which the data processing method described above is applied, reference may be made to the corresponding description in the foregoing product embodiments, and details are not repeated here.
The above examples are merely exemplary embodiments of the present invention and are not intended to limit the present invention; the protection scope of the present invention is defined by the claims. Those skilled in the art may make various modifications or equivalent substitutions to the present invention within the essence and protection scope of the present invention, and such modifications or equivalent substitutions shall also be regarded as falling within the protection scope of the present invention.
Claims (10)
1. An information processing method, applied to a virtual device, comprising:
obtaining first data about a first scene, the first data comprising first position information and first attitude information of a first object in the first scene;
obtaining second data, the second data comprising second position information and second attitude information;
controlling display of the first object based on the first data and the second data.
2. The method according to claim 1, wherein obtaining the first data about the first scene comprises:
sending request information for requesting the first data;
receiving the returned return information and obtaining the first data from the return information.
3. The method according to claim 1, wherein obtaining the second data comprises:
receiving second data that is input, or receiving second data that is transmitted, or
obtaining second data of the virtual device in the first scene.
4. The method according to claim 1, wherein controlling display of the first object based on the first data and the second data comprises:
determining a relative attitude of the first object based on the second attitude information and the first attitude information;
controlling display of the first object based on the second position information and the relative attitude.
5. The method according to claim 1, further comprising:
receiving an operation instruction for a second object;
generating the first data based on the operation instruction;
uploading the first data;
wherein the first object and the second object are the same or different.
6. A virtual device, comprising:
a processor configured to obtain first data about a first scene and second data relative to the first scene, and to control display of a first object based on the first data and the second data; wherein the first data comprises first position information and first attitude information of the first object in the first scene, and the second data comprises second position information and second attitude information.
7. The virtual device according to claim 6, further comprising:
an acquisition module configured to obtain the first data and the second data, and to transmit the obtained first data and second data to the processor.
8. The virtual device according to claim 7, wherein the acquisition module is further configured to send request information for requesting the first data, receive the returned return information, and obtain the first data from the return information.
9. The virtual device according to claim 7, wherein the acquisition module is further configured to receive second data that is input, or receive second data that is transmitted, or obtain second data of the virtual device in the first scene.
10. The virtual device according to claim 6, wherein the processor determines a relative attitude of the first object based on the second attitude information and the first attitude information, and controls display of the first object based on the second position information and the relative attitude.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711184303.3A CN107885334B (en) | 2017-11-23 | 2017-11-23 | Information processing method and virtual equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711184303.3A CN107885334B (en) | 2017-11-23 | 2017-11-23 | Information processing method and virtual equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107885334A true CN107885334A (en) | 2018-04-06 |
CN107885334B CN107885334B (en) | 2021-10-22 |
Family
ID=61774757
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711184303.3A Active CN107885334B (en) | 2017-11-23 | 2017-11-23 | Information processing method and virtual equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107885334B (en) |
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103562968A (en) * | 2011-03-29 | 2014-02-05 | 高通股份有限公司 | System for the rendering of shared digital interfaces relative to each user's point of view |
CN104780209A (en) * | 2015-04-07 | 2015-07-15 | 北京奇点机智信息技术有限公司 | Portable equipment and server for realizing sharing interface scenario |
US20170147064A1 (en) * | 2015-11-19 | 2017-05-25 | Samsung Electronics Co., Ltd. | Method and apparatus for providing information in virtual reality environment |
CN105597317A (en) * | 2015-12-24 | 2016-05-25 | 腾讯科技(深圳)有限公司 | Virtual object display method, device and system |
CN105892650A (en) * | 2016-03-28 | 2016-08-24 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN106200944A (en) * | 2016-06-30 | 2016-12-07 | 联想(北京)有限公司 | The control method of a kind of object, control device and control system |
CN106155326A (en) * | 2016-07-26 | 2016-11-23 | 北京小米移动软件有限公司 | Object identifying method in virtual reality communication and device, virtual reality device |
CN106774872A (en) * | 2016-12-09 | 2017-05-31 | 网易(杭州)网络有限公司 | Virtual reality system, virtual reality exchange method and device |
CN106789991A (en) * | 2016-12-09 | 2017-05-31 | 福建星网视易信息系统有限公司 | A kind of multi-person interactive method and system based on virtual scene |
CN106984043A (en) * | 2017-03-24 | 2017-07-28 | 武汉秀宝软件有限公司 | The method of data synchronization and system of a kind of many people's battle games |
Non-Patent Citations (1)
Title |
---|
XU Aijun: "Skill Training System Based on Multi-user Collaborative Virtual Reality", Computer Systems & Applications *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109920519A (en) * | 2019-02-20 | 2019-06-21 | 东软医疗系统股份有限公司 | The method, device and equipment of process image data |
CN109992108A (en) * | 2019-03-08 | 2019-07-09 | 北京邮电大学 | The augmented reality method and system of multiusers interaction |
CN109992108B (en) * | 2019-03-08 | 2020-09-04 | 北京邮电大学 | Multi-user interaction augmented reality method and system |
CN110908509A (en) * | 2019-11-05 | 2020-03-24 | Oppo广东移动通信有限公司 | Multi-augmented reality device cooperation method and device, electronic device and storage medium |
CN111459432A (en) * | 2020-03-30 | 2020-07-28 | Oppo广东移动通信有限公司 | Virtual content display method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN107885334B (en) | 2021-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108269307B (en) | Augmented reality interaction method and equipment | |
US10567449B2 (en) | Apparatuses, methods and systems for sharing virtual elements | |
EP3615156B1 (en) | Intuitive augmented reality collaboration on visual data | |
JP6610546B2 (en) | Information processing apparatus, information processing method, and program | |
CN110851095B (en) | Multi-screen interactions in virtual and augmented reality | |
CN107390863B (en) | Device control method and device, electronic device and storage medium | |
CN107885334A (en) | A kind of information processing method and virtual unit | |
US10692113B2 (en) | Method for providing customized information through advertising in simulation environment, and associated simulation system | |
EP3250986A1 (en) | Method and system for implementing a multi-user virtual environment | |
US10521603B2 (en) | Virtual reality system for providing secured information | |
WO2015026626A1 (en) | Enabling remote screen sharing in optical see-through head mounted display with augmented reality | |
EP3286601B1 (en) | A method and apparatus for displaying a virtual object in three-dimensional (3d) space | |
CN104731338B (en) | One kind is based on enclosed enhancing virtual reality system and method | |
US20180059812A1 (en) | Method for providing virtual space, method for providing virtual experience, program and recording medium therefor | |
WO2019028855A1 (en) | Virtual display device, intelligent interaction method, and cloud server | |
JP6113897B1 (en) | Method for providing virtual space, method for providing virtual experience, program, and recording medium | |
JP6220937B1 (en) | Information processing method, program for causing computer to execute information processing method, and computer | |
Fadzli et al. | ARGarden: 3D outdoor landscape design using handheld augmented reality with multi-user interaction | |
KR102428438B1 (en) | Method and system for multilateral remote collaboration based on real-time coordinate sharing | |
CN116486051B (en) | Multi-user display cooperation method, device, equipment and storage medium | |
Gelšvartas et al. | Projection mapping user interface for disabled people | |
Chuah et al. | Experiences in using a smartphone as a virtual reality interaction device | |
Orlosky et al. | Effects of throughput delay on perception of robot teleoperation and head control precision in remote monitoring tasks | |
JP2018032413A (en) | Method for providing virtual space, method for providing virtual experience, program and recording medium | |
WO2018216327A1 (en) | Information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |