CN106454322A - VR Image processing system and method thereof - Google Patents
Info
- Publication number
- CN106454322A (application CN201610976256.5A)
- Authority
- CN
- China
- Prior art keywords
- display device
- server end
- identification code
- rendering
- labelling
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Optics & Photonics (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a VR image processing system and method. The system comprises a server side and a client: the server side renders a reality scene and transmits the rendered scene for display on a display device of the client. The client comprises a head-mounted display device and a mobile client; the server side comprises a virtual reality engine, a processor and a GPU connected to the processor. The system further comprises one or more sensors that acquire the position and orientation of the head-mounted display device in real time so as to track head movement and gaze direction; the sensors are an accelerometer, a gyroscope, a compass sensor and a geomagnetic sensor, and their acquisition frequency is 400 Hz or above. The invention thereby avoids the defects of the prior art, namely that the display delay of the application is very high, that the display rendering mechanism is unsuitable for VR, and that sensor data alone are insufficient to support a complete head-tracking function.
Description
Technical field
The present invention relates to the technical field of virtual reality, and in particular to a VR image processing system and method.
Background technology
Virtual reality (VR) technology uses computer simulation to generate a three-dimensional virtual world and provides the user with simulated visual, auditory, tactile and other sensory experiences, so that the user feels present in the scene and can observe objects in the three-dimensional space in real time and without restriction.
Rendering is the basic processing step in virtual reality. In the current Android rendering system, each application is allocated three display buffers to match the GPU rendering rate to the display refresh rate and prevent picture tearing. Obviously, this triple buffering increases the delay from GPU rendering to the final on-screen display. In addition, different display panels introduce different delays: a TFT panel adds roughly 20 ms, whereas an OLED panel adds only about 5 ms. Therefore, when an Android application is used directly in a VR environment, its display delay is very high, and the current display rendering mechanism is not suitable for VR.
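To put these numbers in perspective, the figures above can be combined into a rough motion-to-photon budget. The following minimal Python sketch is illustrative only: the buffer counts and panel delays come from the passage above, while the 60 Hz refresh rate is an assumed, typical value.

```python
# Rough motion-to-photon latency budget (illustrative sketch).
# Buffer counts and panel delays are taken from the passage above;
# the 60 Hz refresh rate is an assumed, typical value.
REFRESH_HZ = 60
FRAME_MS = 1000.0 / REFRESH_HZ  # ~16.7 ms per VSYNC interval

def display_latency_ms(buffer_count: int, panel_delay_ms: float) -> float:
    """Worst-case queueing delay through the buffer chain plus panel delay."""
    return buffer_count * FRAME_MS + panel_delay_ms

if __name__ == "__main__":
    for panel, delay in (("TFT", 20.0), ("OLED", 5.0)):
        triple = display_latency_ms(3, delay)   # stock Android triple buffering
        front = display_latency_ms(1, delay)    # single front buffer (FBR)
        print(f"{panel}: triple-buffered ~{triple:.1f} ms, front-buffered ~{front:.1f} ms")
```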
Compared with a traditional mobile phone application, a VR application must provide presence and immersion, and the key technology for achieving this is head tracking: as the head moves, the rendered content is updated in real time. For example, if the VR scene is a cinema, head tracking allows the user to look around the full 360-degree scene.
Head tracking involves three axes: yaw, roll and pitch. Collecting these data requires a gyroscope, an accelerometer and a geomagnetic sensor sampled at a frequency of at least 400 Hz. However, the data from these sensors alone are not sufficient to support a complete head-tracking function.
Summary of the invention
The technical problem to be solved by the present invention is to provide a VR image processing system and method that avoid the defects of the prior art, namely that the display delay of the application is very high, that the display rendering mechanism is unsuitable for VR, and that sensor data alone are insufficient to support a complete head-tracking function.
To solve the above technical problem, the technical solution of the present invention is as follows:
A VR image processing system comprises a server side and a client, wherein:
the server side renders a reality scene and transmits the rendered scene for display on a display device of the client;
the client comprises a head-mounted display device and a mobile client;
the server side comprises a virtual reality engine, a processor and a GPU connected to the processor;
the mobile client is installed on an intelligent mobile device held by the user; the intelligent mobile device interacts with the server side through wireless communication and is an Android mobile phone.
The VR image processing system further comprises one or more sensors for acquiring the position and orientation of the head-mounted display device in real time, so as to track head movement and gaze direction;
the sensors are an accelerometer, a gyroscope, a compass sensor and a geomagnetic sensor, and their acquisition frequency is 400 Hz or above.
The head-mounted display device comprises:
a display screen, which shows the image output by the server side so that the user can view it;
a message interface, which exchanges information with the server side;
a processing module, which processes the information acquired by the sensors, converts it into the corresponding position coordinates and rotation quaternion, and sends the converted result to the server side.
The mobile device simulates input instructions for the server side through interactive operations to control the virtual scene or a character, and the server side feeds commands back to the mobile device.
The method of the VR image processing system is as follows:
The position and orientation of the HMD are obtained from the sensors to perform head tracking, and the coordinates and the rotation quaternion are fed back to the application program through the message interface. The application program processes the feedback through the rendering module, physical collision module, audio module and core module of the virtual reality engine, sets the coordinate matrix, and transfers the processed image to the head-mounted display device through its hardware abstraction layer, so that the user sees the updated picture. By detecting the activity of the user's eyes, actions can also be controlled by blinking.
The reality scene is rendered as follows: after the CPU computation is complete and the vertex positions of the objects are determined, a specific matrix transformation is applied and the scene is handed to the GPU for rendering; when rendering is complete, the result is sent to the display screen according to the VSYNC signal. The rendering mechanism uses FBR (front buffer rendering), i.e. the three buffers are replaced by a single front buffer, and the GPU and the display screen share the same buffer area.
Head tracking is corrected and optimized with a fusion algorithm, in which the accelerometer and geomagnetic data correct the drift of the gyroscope.
After GPU rendering is complete and before the frame is sent to the display screen, an ATW (asynchronous time warp) transformation is applied according to the current head position, and the warped picture is sent to the display screen.
The method of the VR image processing system also constructs a head-tracking model that additionally takes the positional translation within the space into account.
The method needs only a single matrix change and a single render call; the split view is handled at the GPU layer, and rendering of both views is supported simultaneously.
The method processes the raw sensor data with several filtering algorithms to finally produce smooth and stable data. The processor includes a DSP, and the sensor data are processed on the DSP. The method uses a six-axis fusion algorithm combined with the low-frequency geomagnetic sensor, and at the same time uses a prediction algorithm to predict the head position of the next frame for use by the rendering stage.
Compared with the prior art, the structure of the present invention has the following advantages:
Because the buffer is shared, any GPU write to the buffer directly changes what is on the display screen and can therefore produce picture tearing. The present invention adopts a VSYNC time scheduling algorithm that precisely schedules the rendering and ATW threads according to the VSYNC time, preventing picture tearing. Several filtering algorithms are used to process the raw sensor data and finally produce smooth and stable data, so that a complete head-tracking function can be supported.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the VR image processing system.
Fig. 2 is a schematic diagram of head rotation.
Fig. 3 is a schematic diagram of the fusion algorithm.
Detailed description of the embodiments
In order to make the objects, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the present invention and do not limit it.
As shown in Figs. 1-3, the VR image processing system comprises a server side and a client, wherein:
the server side renders a reality scene and transmits the rendered scene for display on a display device of the client;
the client comprises a head-mounted display device and a mobile client;
the server side comprises a virtual reality engine, a processor and a GPU connected to the processor;
the mobile client is installed on an intelligent mobile device held by the user; the intelligent mobile device interacts with the server side through wireless communication and is an Android mobile phone.
The VR image processing system further comprises one or more sensors for acquiring the position and orientation of the head-mounted display device in real time, so as to track head movement and gaze direction;
the sensors are an accelerometer, a gyroscope, a compass sensor and a geomagnetic sensor, and their acquisition frequency is 400 Hz or above.
The head-mounted display device comprises:
a display screen, which shows the image output by the server side so that the user can view it;
a message interface, which exchanges information with the server side;
a processing module, which processes the information acquired by the sensors, converts it into the corresponding position coordinates and rotation quaternion, and sends the converted result to the server side.
The mobile device simulates input instructions for the server side through interactive operations to control the virtual scene or a character, and the server side feeds commands back to the mobile device.
The method of the VR image processing system is as follows:
The position and orientation of the HMD are obtained from the sensors to perform head tracking, and the coordinates and the rotation quaternion are fed back to the application program through the message interface. The application program processes the feedback through the rendering module, physical collision module, audio module and core module of the virtual reality engine, sets the coordinate matrix, and transfers the processed image to the head-mounted display device through its hardware abstraction layer, so that the user sees the updated picture. By detecting the activity of the user's eyes, actions can also be controlled by blinking.
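As an illustration of the feedback step just described, the following minimal sketch converts sensor yaw/pitch/roll into a rotation quaternion and packages it, together with a position coordinate, into a message for the application program. The function names and the message format are assumptions for illustration; the patent does not specify them.

```python
import math
from dataclasses import dataclass

@dataclass
class Quaternion:
    w: float
    x: float
    y: float
    z: float

def euler_to_quaternion(yaw: float, pitch: float, roll: float) -> Quaternion:
    """Convert head yaw/pitch/roll (radians) into a rotation quaternion."""
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    return Quaternion(
        w=cr * cp * cy + sr * sp * sy,
        x=sr * cp * cy - cr * sp * sy,
        y=cr * sp * cy + sr * cp * sy,
        z=cr * cp * sy - sr * sp * cy,
    )

def make_tracking_message(position, yaw, pitch, roll) -> dict:
    """Package position and rotation quaternion for the message interface (format assumed)."""
    q = euler_to_quaternion(yaw, pitch, roll)
    return {"position": position, "rotation": (q.w, q.x, q.y, q.z)}

# Example: head turned 30 degrees in yaw, no translation.
msg = make_tracking_message((0.0, 0.0, 0.0), math.radians(30), 0.0, 0.0)
print(msg)
```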
The reality scene is rendered as follows: after the CPU computation is complete and the vertex positions of the objects are determined, a specific matrix transformation is applied and the scene is handed to the GPU for rendering; when rendering is complete, the result is sent to the display screen according to the VSYNC signal. The rendering mechanism uses FBR (front buffer rendering), i.e. the three buffers are replaced by a single front buffer, and the GPU and the display screen share the same buffer area. Clearly, because the buffer is shared, any GPU write to the buffer directly changes what is on the display screen and can produce picture tearing. The present invention therefore adopts a VSYNC time scheduling algorithm that precisely schedules the rendering and ATW threads according to the VSYNC time, preventing picture tearing.
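The VSYNC time scheduling described above can be illustrated with a minimal sketch: rendering runs early in the refresh interval, and the ATW pass that writes the shared front buffer is held back until just before the next VSYNC. The 60 Hz rate and the 2 ms ATW budget are assumed values; real scheduling would be driven by the platform's VSYNC signal rather than by sleeping.

```python
import time

FRAME_S = 1.0 / 60.0  # assumed 60 Hz panel refresh

def vsync_scheduled_loop(render_frame, atw_and_front_buffer_write, frames: int = 3) -> None:
    """Schedule rendering and the ATW pass against the VSYNC clock so that the
    front-buffer write lands just ahead of the display refresh (prevents tearing)."""
    next_vsync = time.monotonic() + FRAME_S
    for _ in range(frames):
        render_frame()                        # GPU render for the upcoming frame
        atw_start = next_vsync - 0.002        # assumed 2 ms ATW budget before VSYNC
        time.sleep(max(0.0, atw_start - time.monotonic()))
        atw_and_front_buffer_write()          # warp and write the shared front buffer
        next_vsync += FRAME_S

vsync_scheduled_loop(lambda: print("render"),
                     lambda: print("ATW + front-buffer write"))
```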
Head tracking involves three axes, yaw, roll and pitch, as shown in Fig. 2. Collecting these data requires a gyroscope, an accelerometer and a geomagnetic sensor sampled at a frequency of at least 400 Hz. Because the data from these sensors alone are not sufficient to support a complete head-tracking function, head tracking is corrected and optimized with a fusion algorithm, shown in Fig. 3, in which the accelerometer and geomagnetic data correct the drift of the gyroscope.
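A minimal stand-in for the fusion algorithm of Fig. 3 is a complementary filter: the gyroscope rates are integrated, and the result is pulled toward the accelerometer (pitch/roll) and geomagnetic (yaw) references to cancel drift. The blending factor and the per-axis pairing are assumptions; the passage only states that accelerometer and geomagnetic data correct the gyroscope.

```python
def complementary_fusion(gyro_rate, accel_angle, mag_yaw, prev, dt, alpha=0.98):
    """One fusion step: integrate the gyroscope, then blend the result with the
    accelerometer (pitch/roll) and magnetometer (yaw) references to remove drift.
    alpha is an assumed blending factor."""
    yaw, pitch, roll = prev
    # Integrate angular rates (gyroscope) over the sample interval.
    yaw += gyro_rate[0] * dt
    pitch += gyro_rate[1] * dt
    roll += gyro_rate[2] * dt
    # Blend with absolute references to cancel accumulated drift.
    yaw = alpha * yaw + (1 - alpha) * mag_yaw
    pitch = alpha * pitch + (1 - alpha) * accel_angle[0]
    roll = alpha * roll + (1 - alpha) * accel_angle[1]
    return yaw, pitch, roll

# 400 Hz sampling as required above: dt = 1/400 s, one second of samples.
pose = (0.0, 0.0, 0.0)
for _ in range(400):
    pose = complementary_fusion((0.01, 0.0, 0.0), (0.0, 0.0), 0.0, pose, dt=1 / 400)
print(pose)
```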
The present invention implements an asynchronous time warp (ATW) algorithm, which provides two main optimizations. First, it reduces the delay introduced by the sensors: after GPU rendering is complete and before the frame is sent to the display screen, an ATW transformation is applied according to the current head position and the warped picture is sent to the display screen, compensating for the picture delay produced by GPU rendering. Second, ATW compensates for an insufficient frame rate: if the current frame fails to finish rendering in time, the ATW transformation is applied to the previously completed frame, keeping the frame rate stable at 60 FPS.
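The effect of ATW can be illustrated with a minimal sketch that approximates the warp as a horizontal image shift proportional to the yaw change between render time and scan-out time, and that re-warps the previous frame when the new one misses its deadline. The field of view, resolution and linear approximation are assumptions; a full implementation would reproject with the rotation matrix.

```python
FOV_DEG = 90.0    # assumed horizontal field of view
WIDTH_PX = 1080   # assumed eye-buffer width

def atw_shift_px(yaw_at_render: float, yaw_at_scanout: float) -> float:
    """Approximate asynchronous time warp as a horizontal image shift: a small yaw
    change maps almost linearly to pixels for a planar projection."""
    delta_deg = yaw_at_scanout - yaw_at_render
    return delta_deg / FOV_DEG * WIDTH_PX

def present(frame_ready: bool, last_frame_yaw: float, current_yaw: float) -> str:
    """If the new frame missed its deadline, re-warp the previous frame so the
    displayed image still tracks the head, keeping the output at a stable 60 FPS."""
    shift = atw_shift_px(last_frame_yaw, current_yaw)
    source = "new frame" if frame_ready else "previous frame (re-warped)"
    return f"display {source}, shifted {shift:.1f} px"

print(present(frame_ready=False, last_frame_yaw=10.0, current_yaw=11.5))
```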
The method of the VR image processing system also constructs a head-tracking model that additionally takes the positional translation within the space into account, giving a better experience.
Current VR systems render the two views separately according to the parallax between the left and right eyes, so every model and view must be computed twice. In fact, the difference between the two views produced by the left-right parallax is very small, and the two images are largely similar. The optimization therefore adopts multi-view rendering: the method needs only a single matrix change and a single render call, the split view is handled at the GPU layer, and rendering of both views is supported simultaneously, reducing the load on the GPU.
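A minimal sketch of the multi-view idea: both per-eye view offsets are derived from a single head pose and submitted with one draw call, instead of issuing two full render passes. The interpupillary distance and all names are assumed for illustration.

```python
import math

IPD_M = 0.064  # assumed interpupillary distance in metres

def eye_view_offsets(head_yaw: float, ipd: float = IPD_M):
    """Derive the two per-eye translation offsets from a single head pose.
    In multi-view rendering the scene is submitted once and the GPU applies
    both offsets, instead of two full render passes."""
    right = (math.cos(head_yaw), -math.sin(head_yaw))  # head right vector (horizontal plane)
    half = ipd / 2.0
    left_eye = (-right[0] * half, -right[1] * half)
    right_eye = (right[0] * half, right[1] * half)
    return left_eye, right_eye

def draw_scene_multiview(scene: str, head_yaw: float) -> None:
    """Single draw submission carrying both eye offsets (illustrative only)."""
    offsets = eye_view_offsets(head_yaw)
    print(f"submit {scene} once with per-eye offsets {offsets}")

draw_scene_multiview("cinema scene", head_yaw=math.radians(15))
```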
In a conventional arrangement, the server side renders the reality scene and transmits it for display on the display device of the client as follows: a server side and a network are provided in advance, the network includes an access device, and the access device includes a 3G wireless unit; the 3G wireless unit completes 3G wireless network access and establishes a 3G wireless data link, and the display device is connected to the 3G wireless network; the server side includes a processor connected to a 3G module and a touch screen. The server side then sends the rendered reality scene to the display device over the network by communicating with the 3G wireless unit through the 3G module, so that the rendered reality scene is sent to the display device via the 3G wireless data link.
However, sending the rendered reality scene over the 3G wireless data link in this way also requires authenticating the IP address of the display device. If the plain-text IP address of the display device, or the decoded IP address, cannot be authenticated, confusion easily arises; the code of the display device must be obtained and the display device must give an authentication response when the link is set up before the server side and the display device can be linked and the rendered reality scene can be sent, which makes this step complicated to run. If, in addition, the transmission of the rendered reality scene also relies on a Zigbee module, extra circuit construction cost and corresponding control programs must be added. Moreover, erroneous connections frequently occur, the display device can only be identified in a single form, the procedure has little entertainment value, the authentication duration is hard to control, the authentication efficiency is low, the process is highly selective, there is no history record, and repeatability is poor.
In the present invention, the server side renders the reality scene and transmits it for display on the display device of the client as follows: a server side and a network are provided in advance, the network includes an access device, the access device includes a 3G wireless unit, the 3G wireless unit completes 3G wireless network access and establishes a 3G wireless data link, the display device is connected to the 3G wireless network, and the server side includes a processor connected to a 3G module and a touch screen. The server side then sends the rendered reality scene to the display device over the network by the following steps:
Step 1: The identification code of the display device is marked in advance and the display device runs and starts its second speedup sending program; after the server side obtains the rendered reality scene, it runs and starts the first speedup sending program.
Step 2: Once the server side is in the running state of the first speedup sending program, it marks its own identification code with a mark that is consistent with the mark carried by the identification code of the display device; it also detects the identification codes of the display devices in the 3G wireless network and checks whether any of them carries the same mark.
Step 3: If the identification code of a display device in the 3G wireless network carries the same mark, the server side is allowed to establish an information link with that display device and send the rendered reality scene to it.
In these steps, after the server side is in the running state of the first speedup sending program, marking the server side's own identification code gives it a unique mark; thus, after the display device has also synchronously run the second speedup sending program, the server side detects the identification codes of the display devices in the 3G wireless network and checks whether any of them carries the same mark; if so, the server side establishes an information link with that display device and sends the rendered reality scene to it.
Furthermore, even if the server side is not in the running state of the first speedup sending program, as long as it is connected to the 3G wireless unit through the 3G module, it can likewise mark its own identification code with a mark consistent with that of the display device, detect the identification codes of the display devices in the 3G wireless network, check whether any of them carries the same mark, and, if so, establish an information link with that display device and send the rendered reality scene to it.
In Step 2, the server side's own identification code is marked in the following way:
a unique symbol and a duration mark are added to the server side's own identification code; that is, when the mark is applied, the unique symbol and the duration mark are added to the server side's own identification code. The unique symbol is set by the operator; the same ciphertext as the mark of the identification code of the corresponding display device may be used as the unique symbol. Then, when an unrelated display device is running, erroneous connections and erroneous transmissions do not occur.
The duration mark of the server side represents the time elapsed since the first speedup sending program was run and started, and the duration mark of the display device represents the time elapsed since the second speedup sending program was run and started, for example 2 minutes 33 seconds. A configured duration range thereby further ensures that the rendered reality scene is recognized and sent to the display device only when the devices are within the duration range.
After the same transformation mode has thus been adopted between the server side and the display device, both the server side and the display device carry the transformation mode with the same unique symbol.
The same transformation mode can also be set on other display devices in the 3G wireless network to which the rendered reality scene needs to be sent. This not only guarantees that the information link is established correctly, i.e. that the connection is correct, but also improves the entertainment value.
Thus, during the whole period of sending the rendered reality scene, all the operator has to do is run and start the first speedup sending program, after which the server side sends the rendered reality scene to the display device. The operator does not need to authenticate the IP address of the display device, nor to touch the touch screen repeatedly to confirm; the first speedup sending program and the second speedup sending program are both executed as task-level programs. In this way, the rendered reality scene is sent to the display device at an accelerated rate, no additional relay framework is needed for the transmission, no information transmission channel is occupied, and the operator's overhead is reduced.
Checking whether the identification code of a display device detected in the 3G wireless network carries the unique symbol also means checking whether the duration mark of the display device in the 3G wireless network and the duration mark of the server side both fall within the configured duration range.
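The marking and matching procedure above can be summarized in a minimal sketch: both sides carry the same unique symbol (for example a shared ciphertext) plus a duration mark, and a link is established only when the symbols match and both durations fall inside the configured window. All names are assumptions; the 3 to 4 minute window is the one mentioned later in the description.

```python
import time
from dataclasses import dataclass

WINDOW_S = (180, 240)  # configured duration window (3 to 4 minutes, per the description)

@dataclass
class MarkedId:
    id_code: str
    symbol: str        # shared "unique symbol", e.g. a common ciphertext
    started_at: float  # when the corresponding speedup sending program was started

def mark(id_code: str, symbol: str) -> MarkedId:
    """Mark an identification code with the shared symbol and a duration mark."""
    return MarkedId(id_code, symbol, time.monotonic())

def matches(server: MarkedId, display: MarkedId, now: float) -> bool:
    """Link only if both sides carry the same unique symbol and both duration
    marks fall inside the configured window."""
    same_symbol = server.symbol == display.symbol
    in_window = all(WINDOW_S[0] <= now - m.started_at <= WINDOW_S[1]
                    for m in (server, display))
    return same_symbol and in_window

# Example: both sides marked with the same ciphertext-derived symbol.
server_id = mark("SERVER-01", symbol="cipher-ABC")
display_id = mark("HMD-42", symbol="cipher-ABC")
later = time.monotonic() + 200  # pretend 200 s have elapsed since both programs started
print("establish link and send rendered scene"
      if matches(server_id, display_id, later) else "no link")
```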
Compared with the prior art, the structure of the present invention has the following advantages:
After the server side obtains the rendered reality scene, it runs and starts the first speedup sending program. Once the server side is in the running state of the first speedup sending program, it marks its own identification code with a mark consistent with the mark of the identification code of the display device, detects the identification codes of the display devices in the 3G wireless network, and checks whether any of them carries the same mark. If the identification code of a display device in the 3G wireless network carries the same mark, the server side establishes an information link with that display device and sends the rendered reality scene to it. The display device can thus be identified actively, an efficient information link is constructed, and the rendered reality scene is sent exactly as the operator intends. All the operator has to do is run and start the first speedup sending program, after which the server side sends the rendered reality scene to the display device; the operator does not need to authenticate the IP address of the display device, nor to touch the touch screen repeatedly to confirm. The first speedup sending program and the second speedup sending program are both executed as task-level programs. In this way, the rendered reality scene is sent to the display device at an accelerated rate, no additional relay framework is needed for the transmission, no information transmission channel is occupied, and the operator's overhead is reduced. When an unrelated display device is running, erroneous connections and erroneous transmissions do not occur. The configured duration range ensures that the rendered reality scene is recognized and sent to the display device only when the server side is within the duration range, which prevents the operator from performing unnecessary operations. After the same transformation mode has been adopted between the server side and the display device, both carry the transformation mode with the same unique symbol.
The duration mark may also be the time for which the 3G module has been started.
It is also possible to check whether the identification code of a display device in the 3G wireless network synchronously carries the same unique symbol as the server side, and whether the duration mark of the display device in the 3G wireless network and the duration mark of the server side fall within the configured duration range. The duration range is a random value between 3 and 4 minutes: if the duration exceeds the range, some display devices may often end up connected to the server side at the same time, and if the duration is too short, connections may fail.
Furthermore, while the server side is establishing the information link with the display device and sending it the rendered reality scene, a button is shown on the touch screen; when the operator wants to send the rendered reality scene to the display device, touching this button starts the transmission. During the whole period of sending the rendered reality scene, all the operator really has to do is run and start the first speedup sending program, after which the server side sends the rendered reality scene to the display device; the operator does not need to authenticate the IP address of the display device, nor to touch the touch screen repeatedly to confirm, and the identifier of the display device does not need to be shown on the touch screen. The sending procedure is therefore simple, the touch screen display is uncluttered, and the operator can carry it out easily.
If more than one information link has been established, the identification codes of the display devices available for selection are shown on the touch screen so that the operator can choose among them.
While the rendered reality scene is being sent to the display device, a mark indicating that information is being transmitted is shown on the touch screen to indicate that the transmission is in progress; when the transmission of the rendered reality scene ends, a mark indicating that the transmission has ended is shown on the touch screen, and the first and second speedup sending programs are terminated accordingly.
After Step 3 has been executed, the server side terminates the first speedup sending program, which prevents other display devices from connecting by mistake. While terminating the first speedup sending program, the mark on the identification code of the server side is removed, and at the same time the identification code of the display device is stored together with the rendered reality scene in a mapping table. Storing them in this way makes it easy, the next time information is transmitted, to show a history record from which the operator can choose.
After the identification code of the display device has been stored together with the rendered reality scene in the mapping table, the next time the server side obtains a rendered reality scene and runs and starts the first speedup sending program, the identification codes of the display devices recorded in the mapping table are listed on the touch screen of the server side together with the rendered reality scenes already transmitted to them. Thus, as soon as the first speedup sending program is running again, the recorded identification codes and the transmitted reality scenes are shown on the touch screen, so that the operator can identify display devices efficiently, find the intended display device more quickly, and thereby establish the information link and transmit the information.
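A minimal sketch of the mapping table used for this history record: each display device identification code is stored with the rendered scenes sent to it, and the stored entries are the lines later listed on the touch screen. The data structure and names are assumptions.

```python
# Minimal mapping-table sketch: identification code -> rendered scenes sent to it.
from collections import defaultdict

mapping_table = defaultdict(list)

def record_transmission(display_id_code: str, scene_name: str) -> None:
    """Store the device identification code together with the scene just sent."""
    mapping_table[display_id_code].append(scene_name)

def history_for_touchscreen() -> list:
    """Lines shown on the touch screen so the operator can pick a known device."""
    return [f"{dev}: {', '.join(scenes)}" for dev, scenes in mapping_table.items()]

record_transmission("HMD-42", "cinema scene v1")
record_transmission("HMD-42", "cinema scene v2")
print("\n".join(history_for_touchscreen()))
```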
On the other hand, the method for the image processing system of described VR is needed using various filtering algorithms, accidental data to be carried out
Process, the final data producing smooth steady;Process sensing data and should not produce higher CPU overhead, therefore in some sides
In case, described processor includes DSP, carries out specially sensing data being processed using DSP;Speed due to geomagnetic sensor
Rate is limited, and the method for the image processing system of described VR coordinates low frequency geomagnetic sensor to be applied in combination using six axle blending algorithms,
Predict using prediction algorithm that the head position of next frame uses for rendering part simultaneously.
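The filtering and prediction stage can be illustrated with a minimal sketch: an exponential smoothing step stands in for the (unspecified) filtering algorithms, and a constant-angular-velocity extrapolation predicts the head pose of the next frame for the renderer. The smoothing factor and frame interval are assumed values.

```python
def smooth(prev: float, sample: float, beta: float = 0.2) -> float:
    """Simple exponential smoothing as a stand-in for the filtering stage
    (the description does not name the specific filters; beta is assumed)."""
    return (1 - beta) * prev + beta * sample

def predict_next_frame(yaw: float, yaw_rate: float, frame_dt: float = 1 / 60) -> float:
    """Constant-angular-velocity prediction of the head yaw at the next frame,
    handed to the renderer so it draws where the head will be, not where it was."""
    return yaw + yaw_rate * frame_dt

filtered = 0.0
for raw in (0.0, 0.5, 0.4, 0.6, 0.55):   # jittery yaw samples (degrees)
    filtered = smooth(filtered, raw)
rate = 30.0                                # degrees per second, from the gyroscope
print(f"filtered yaw {filtered:.2f} deg, "
      f"predicted next-frame yaw {predict_next_frame(filtered, rate):.2f} deg")
```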
Taking the above-described preferred embodiments of the present invention as a guide, the relevant personnel can, on the basis of the above description, make various changes and modifications without departing from the technical idea of the present invention. The technical scope of the present invention is not limited to the contents of the description and must be determined according to the claims.
Claims (9)
1. A VR image processing system, characterized in that it comprises a server side and a client, wherein:
the server side renders a reality scene and transmits the rendered scene for display on a display device of the client;
the client comprises a head-mounted display device and a mobile client;
the server side comprises a virtual reality engine, a processor and a GPU connected to the processor;
the mobile client is installed on an intelligent mobile device held by the user, the intelligent mobile device interacts with the server side through wireless communication, and the intelligent mobile device is an Android mobile phone;
the VR image processing system further comprises one or more sensors for acquiring the position and orientation of the head-mounted display device in real time, so as to track head movement and gaze direction;
the sensors are an accelerometer, a gyroscope, a compass sensor and a geomagnetic sensor, and their acquisition frequency is 400 Hz or above.
2. The VR image processing system according to claim 1, characterized in that the head-mounted display device comprises:
a display screen, which shows the image output by the server side so that the user can view it;
a message interface, which exchanges information with the server side;
a processing module, which processes the information acquired by the sensors, converts it into the corresponding position coordinates and rotation quaternion, and sends the converted result to the server side.
3. The VR image processing system according to claim 1, characterized in that the mobile device simulates input instructions for the server side through interactive operations to control the virtual scene or a character, and the server side feeds commands back to the mobile device.
4. A method of the VR image processing system according to claim 1, characterized in that it is as follows:
the position and orientation of the HMD are obtained from the sensors to perform head tracking, and the coordinates and the rotation quaternion are fed back to the application program through the message interface; the application program processes the feedback through the rendering module, physical collision module, audio module and core module of the virtual reality engine, sets the coordinate matrix, and transfers the processed image to the head-mounted display device through its hardware abstraction layer, so that the user sees the updated picture; by detecting the activity of the user's eyes, actions can also be controlled by blinking;
in addition, the reality scene is rendered as follows: after the CPU computation is complete and the vertex positions of the objects are determined, a specific matrix transformation is applied and the scene is handed to the GPU for rendering; when rendering is complete, the result is sent to the display screen according to the VSYNC signal, and the rendering mechanism uses FBR, i.e. the three buffers are replaced by a single front buffer, and the GPU and the display screen share the same buffer area;
head tracking is corrected and optimized with a fusion algorithm, in which the accelerometer and geomagnetic data correct the drift of the gyroscope.
5. the method for the image processing system of VR according to claim 4 is it is characterised in that after the completion of described GPU renders,
Before sending display screen, ATW conversion is carried out according to current head positional information, the picture after conversion is sent display screen.
6. the method for the image processing system of VR according to claim 5 is it is characterised in that the image procossing system of described VR
The method of system also constructs following head moving model, takes into account the position translation during space is set to simultaneously.
7. the method for the image processing system of VR according to claim 5 is it is characterised in that the image procossing system of described VR
The method of system only need to be called the change of matrix and render interface, to process split view in GPU layer, and supports to render simultaneously and paint
System.
8. the method for the image processing system of VR according to claim 5 is it is characterised in that the image procossing system of described VR
The method of system is processed to accidental data using various filtering algorithms, the final data producing smooth steady;
Described processor includes DSP, carries out sensing data is processed using DSP;
The method of the image processing system of described VR coordinates low frequency geomagnetic sensor to be applied in combination using six axle blending algorithms, simultaneously
Predict that using prediction algorithm the head position of next frame uses for rendering part.
9. the method for the image processing system of VR according to claim 5 is it is characterised in that other server end is by reality
Scene is rendered, and transmits to the display device of client the mode showing for predetermined server end and network first, institute
State network and include access and realize device, described access is realized device and included 3G radio-cell, and described 3G radio-cell completes 3G no
Line network insertion, sets up 3G wireless data link, and display device is connected with 3G wireless network, and described server end includes processing
Device, described processor is connected with touch screen with 3G module, and then the reality scene after rendering is sent out by server end by network
The step giving display device is as follows:
Step 1:Identification code for display device runs after being marked and starts to its second speedup router in advance, takes
After business device end gets the reality scene after rendering, run and start to the first speedup router;
Step 2:After server end is in the running status of the first speedup router, to the knowledge of of described server end itself
Other code execution flag, the labelling that described labelling enters with the identification code of display device is consistent, in addition also detects in 3G wireless network
The identification code of display device, the identification code of the display device in resolution 3G wireless network is either with or without the same labelling;
Step 3:If the identification code of the display device in 3G wireless network has the same labelling, just allow described server end with aobvious
Show that equipment is set up the reality scene after information link is come described rendering and is sent to display device;
In such step, described be in the first speedup router when server end running status after, to described clothes
The identification code execution flag at business device end itself, allows the described server end identification code of itself carry an only labelling, thus in display
After equipment also synchronously performs the second speedup router, described server end detects the display device in 3G wireless network
Identification code, differentiate 3G wireless network in display device identification code either with or without the same labelling, if in 3G wireless network
The identification code of display device has the same labelling, just allows described server end just set up information link with display device and comes described
Reality scene after rendering is sent to display device;
If even if described server end is not in the running status of the first speedup router, by 3G module and 3G radio-cell
It is connected it is also possible to the synchronous identification code execution flag to described server end itself, described labelling is with the identification of display device
The labelling that code enters is consistent, in addition also detects the identification code of the display device in 3G wireless network, differentiates aobvious in 3G wireless network
The identification code showing equipment is either with or without the same labelling;If the identification code of the display device in 3G wireless network has the same mark
Note, just allows described server end set up the reality scene after information link is come described rendering with display device and be sent to display and sets
Standby reality scene after described rendering is sent to display device;
In described step 2, the mode of the described identification code execution flag to described server end itself is:
Increase on the described server end identification code of itself uniqueness symbol and duration labelling, that is, in execution flag it
Border, increases the symbol of uniqueness and the labelling of duration on the described server end identification code of itself, and described unique symbol is to hold
Passerby, to arrange, can be that the labelling of the identification code of same display device is used as described unique symbol using same ciphertext;
Being labeled as of the duration of described server end represents the duration running and starting to the first speedup router, described display
Being labeled as of the duration of equipment represents the duration running and starting to the second speedup router;So further according to setting
Duration scope ensure only the described personal integrated control device in the range of duration just can realize assert and send described in render
Reality scene afterwards is to display device;
After thus adopting same transform mode between described server end and display device, described server end and display device
The just transform mode of the symbol with the same uniqueness;
Also can be also provided with same on the display device of the reality scene after other need to render described in transmission in 3G wireless network
The described transform mode of sample;
Described first speedup router and the second speedup router are all using task level program performing;
The labelling of described duration can also be for starting the duration of 3G module;
Can also distinguish display device in 3G wireless network identification code be synchronous carry the same with described server end
Unique symbol, and distinguish the mark of the duration of the labelling of the duration of display device in 3G wireless network and described server end
Note is whether in the duration category setting;
Described server end is set up the reality scene after information link is come described rendering and is sent to display device with display device
Period, manifest a button on the touchscreen, the reality scene after described rendering must be sent to display in described executor and set
When standby, touch this button and begin to send;
If under conditions of the information link more than one set up, just show the identification for the display device selected on the touchscreen
Code.Allow executor make to selection;
Described render after reality scene be sent to display device during, manifest on the touchscreen one expression transmission information mark
Note, to illustrate with this sending information now;After the reality scene transmission after described rendering terminates, just show on the touchscreen
The labelling that the labelling of an existing expression transmission information terminates;To illustrate that transmission information is over this, the first increasing is terminated with regard to this
Fast router and the second speedup router;
After step 3 execution terminates, described server end just terminates the first speedup router, sends journey terminating the first speedup
During sequence, the labelling on the identification code of server end is removed, the synchronous identification code display device is with showing after described rendering
Real field scape is stored in a mapping table;
After the identification code of described display device is stored in a mapping table with the reality scene after described rendering, in server
After end gets the reality scene after rendering again, run and start to when the first speedup router, in server end
The identification code of display device listing in mapping table record on touch screen is with the reality scene after rendering described in being transmitted across.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610976256.5A CN106454322A (en) | 2016-11-07 | 2016-11-07 | VR Image processing system and method thereof |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610976256.5A CN106454322A (en) | 2016-11-07 | 2016-11-07 | VR Image processing system and method thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106454322A (en) | 2017-02-22
Family
ID=58179599
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610976256.5A Pending CN106454322A (en) | 2016-11-07 | 2016-11-07 | VR Image processing system and method thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106454322A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107085821A (en) * | 2017-04-27 | 2017-08-22 | 郑州云海信息技术有限公司 | A kind of method of GPU BOX servers VR optimizations |
CN107197154A (en) * | 2017-06-22 | 2017-09-22 | 北京奇艺世纪科技有限公司 | A kind of panoramic video antihunt means and device |
CN107870264A (en) * | 2016-10-12 | 2018-04-03 | 杨甫进 | Electric wire tester and its method |
CN107889074A (en) * | 2017-10-20 | 2018-04-06 | 深圳市眼界科技有限公司 | Dodgem data processing method, apparatus and system for VR |
CN108694738A (en) * | 2017-04-01 | 2018-10-23 | 英特尔公司 | The multilayer of decoupling renders frequency |
CN109558243A (en) * | 2018-11-30 | 2019-04-02 | Oppo广东移动通信有限公司 | Processing method, device, storage medium and the terminal of virtual data |
CN109739356A (en) * | 2018-12-29 | 2019-05-10 | 歌尔股份有限公司 | Control method, device and the VR helmet that image is shown in VR system |
CN110574375A (en) * | 2017-04-28 | 2019-12-13 | 苹果公司 | Video pipeline |
CN110825219A (en) * | 2018-08-14 | 2020-02-21 | 三星电子株式会社 | Electronic device, control method of electronic device, and electronic system |
WO2020060984A1 (en) * | 2018-09-17 | 2020-03-26 | Qualcomm Incorporated | Cross layer traffic optimization for split xr |
CN110955046A (en) * | 2018-09-27 | 2020-04-03 | 精工爱普生株式会社 | Head-mounted display device and cover member |
WO2020078354A1 (en) * | 2018-10-16 | 2020-04-23 | 北京凌宇智控科技有限公司 | Video streaming system, video streaming method and apparatus |
CN114489538A (en) * | 2021-12-27 | 2022-05-13 | 炫彩互动网络科技有限公司 | Terminal display method of cloud game VR |
US11816820B2 (en) | 2017-07-21 | 2023-11-14 | Apple Inc. | Gaze direction-based adaptive pre-filtering of video data |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104915979A (en) * | 2014-03-10 | 2015-09-16 | 苏州天魂网络科技有限公司 | System capable of realizing immersive virtual reality across mobile platforms |
CN105323129A (en) * | 2015-12-04 | 2016-02-10 | 上海弥山多媒体科技有限公司 | Home virtual reality entertainment system |
CN105391858A (en) * | 2015-11-04 | 2016-03-09 | 深圳维爱特科技有限公司 | Virtual reality glass unit, and control method, apparatus and system thereof |
CN105850148A (en) * | 2013-12-26 | 2016-08-10 | 精工爱普生株式会社 | Video transmission and display system |
CN105898138A (en) * | 2015-12-18 | 2016-08-24 | 乐视致新电子科技(天津)有限公司 | Panoramic video play method and device |
CN105913371A (en) * | 2015-11-16 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | System optimization method for virtual reality application delay and system optimization device thereof |
- 2016-11-07: Application CN201610976256.5A filed in China; published as CN106454322A (status: Pending)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105850148A (en) * | 2013-12-26 | 2016-08-10 | 精工爱普生株式会社 | Video transmission and display system |
CN104915979A (en) * | 2014-03-10 | 2015-09-16 | 苏州天魂网络科技有限公司 | System capable of realizing immersive virtual reality across mobile platforms |
CN105391858A (en) * | 2015-11-04 | 2016-03-09 | 深圳维爱特科技有限公司 | Virtual reality glass unit, and control method, apparatus and system thereof |
CN105913371A (en) * | 2015-11-16 | 2016-08-31 | 乐视致新电子科技(天津)有限公司 | System optimization method for virtual reality application delay and system optimization device thereof |
CN105323129A (en) * | 2015-12-04 | 2016-02-10 | 上海弥山多媒体科技有限公司 | Home virtual reality entertainment system |
CN105898138A (en) * | 2015-12-18 | 2016-08-24 | 乐视致新电子科技(天津)有限公司 | Panoramic video play method and device |
Cited By (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107870264A (en) * | 2016-10-12 | 2018-04-03 | 杨甫进 | Electric wire tester and its method |
CN107870264B (en) * | 2016-10-12 | 2020-09-11 | 成武县晨晖环保科技有限公司 | Wire and cable tester and method thereof |
CN108694738B (en) * | 2017-04-01 | 2024-05-24 | 英特尔公司 | Decoupled multi-layer rendering frequencies |
CN108694738A (en) * | 2017-04-01 | 2018-10-23 | 英特尔公司 | The multilayer of decoupling renders frequency |
CN107085821A (en) * | 2017-04-27 | 2017-08-22 | 郑州云海信息技术有限公司 | A kind of method of GPU BOX servers VR optimizations |
US11727619B2 (en) | 2017-04-28 | 2023-08-15 | Apple Inc. | Video pipeline |
US12086919B2 (en) | 2017-04-28 | 2024-09-10 | Apple Inc. | Video pipeline |
CN110574375A (en) * | 2017-04-28 | 2019-12-13 | 苹果公司 | Video pipeline |
CN107197154A (en) * | 2017-06-22 | 2017-09-22 | 北京奇艺世纪科技有限公司 | A kind of panoramic video antihunt means and device |
US11900578B2 (en) | 2017-07-21 | 2024-02-13 | Apple Inc. | Gaze direction-based adaptive pre-filtering of video data |
US11816820B2 (en) | 2017-07-21 | 2023-11-14 | Apple Inc. | Gaze direction-based adaptive pre-filtering of video data |
CN107889074A (en) * | 2017-10-20 | 2018-04-06 | 深圳市眼界科技有限公司 | Dodgem data processing method, apparatus and system for VR |
CN110825219A (en) * | 2018-08-14 | 2020-02-21 | 三星电子株式会社 | Electronic device, control method of electronic device, and electronic system |
US11127214B2 (en) | 2018-09-17 | 2021-09-21 | Qualcomm Incorporated | Cross layer traffic optimization for split XR |
CN112673642A (en) * | 2018-09-17 | 2021-04-16 | 高通股份有限公司 | Cross-layer traffic optimization for split XR |
WO2020060984A1 (en) * | 2018-09-17 | 2020-03-26 | Qualcomm Incorporated | Cross layer traffic optimization for split xr |
CN110955046B (en) * | 2018-09-27 | 2022-02-11 | 精工爱普生株式会社 | Head-mounted display device and cover member |
CN110955046A (en) * | 2018-09-27 | 2020-04-03 | 精工爱普生株式会社 | Head-mounted display device and cover member |
WO2020078354A1 (en) * | 2018-10-16 | 2020-04-23 | 北京凌宇智控科技有限公司 | Video streaming system, video streaming method and apparatus |
US11500455B2 (en) | 2018-10-16 | 2022-11-15 | Nolo Co., Ltd. | Video streaming system, video streaming method and apparatus |
WO2020108101A1 (en) * | 2018-11-30 | 2020-06-04 | Oppo广东移动通信有限公司 | Virtual data processing method and apparatus, and storage medium, and terminal |
CN109558243A (en) * | 2018-11-30 | 2019-04-02 | Oppo广东移动通信有限公司 | Processing method, device, storage medium and the terminal of virtual data |
CN109739356A (en) * | 2018-12-29 | 2019-05-10 | 歌尔股份有限公司 | Control method, device and the VR helmet that image is shown in VR system |
CN114489538A (en) * | 2021-12-27 | 2022-05-13 | 炫彩互动网络科技有限公司 | Terminal display method of cloud game VR |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106454322A (en) | VR Image processing system and method thereof | |
CN106527713B (en) | The three-dimensional data rendering system and its method of VR | |
CN105976417B (en) | Animation generation method and device | |
US20160357491A1 (en) | Information processing apparatus, information processing method, non-transitory computer-readable storage medium, and system | |
JP5147933B2 (en) | Man-machine interface device system and method | |
KR20220088496A (en) | Porting physical objects into virtual reality | |
CN109863510A (en) | Redundancy tracking system | |
US20150062164A1 (en) | Head mounted display, method of controlling head mounted display, computer program, image display system, and information processing apparatus | |
WO2019037489A1 (en) | Map display method, apparatus, storage medium and terminal | |
CN106134186A (en) | Distant existing experience | |
CN110851095B (en) | Multi-screen interactions in virtual and augmented reality | |
US11979684B2 (en) | Content distribution device, content distribution program, content distribution method, content display device, content display program, and content display method | |
WO2016084941A1 (en) | System, program, and method for operating screen by linking display and plurality of controllers connected via network | |
JP2021081757A (en) | Information processing equipment, information processing methods, and program | |
KR20220063205A (en) | Augmented reality for setting up an internet connection | |
WO2020149270A1 (en) | Method for generating 3d object arranged in augmented reality space | |
CN116261706A (en) | System and method for object tracking using fused data | |
CN103752010B (en) | For the augmented reality covering of control device | |
JP5661835B2 (en) | Terminal device, display processing method, and display processing program | |
CN106200900A (en) | Based on identifying that the method and system that virtual reality is mutual are triggered in region in video | |
CN111488090A (en) | Interaction method, interaction device, interaction system, electronic equipment and storage medium | |
CN105824430A (en) | Three-dimensional information interaction method and wearable equipment | |
WO2022176450A1 (en) | Information processing device, information processing method, and program | |
CN106200923A (en) | The control method of a kind of virtual reality system and device | |
WO2023090163A1 (en) | Information processing device, information processing method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| | C06 | Publication | |
| | PB01 | Publication | |
| | C10 | Entry into substantive examination | |
| | SE01 | Entry into force of request for substantive examination | |
| | RJ01 | Rejection of invention patent application after publication | Application publication date: 20170222 |