CN109978945A - Information processing method and device for augmented reality - Google Patents
Information processing method and device for augmented reality
- Publication number
- CN109978945A (application number CN201910142496.9A)
- Authority
- CN
- China
- Prior art keywords
- scene
- viewing angle
- target
- target scene
- initial image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
Abstract
This application discloses an information processing method and device for augmented reality. The method includes: obtaining virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; obtaining a subsequent image of the target scene; determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene; when the viewing-angle change is within a preset range, correcting the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display. The application solves the technical problem in related AR data processing of how to efficiently complete the data processing of an augmented reality scene without adding extra equipment.
Description
Technical field
This application relates to the field of AR (augmented reality), and in particular to an information processing method and device for augmented reality.
Background technique
Augmented reality (AR) is a technology that superimposes computer-generated virtual media information, such as video, images, text, and sound, on top of visual information from the real world.

One important application field of the technology is helping users experience, within their current scene, things they cannot reach in physical distance or in time, increasing or enhancing the user's perception of information in the real-world scene. However, AR technology may require dedicated systems or hardware, such as head-mounted displays, smart glasses, or computers with discrete graphics cards; these impose certain costs or requirements on the use environment, which limits the usage scenarios of AR.

In related AR data-processing technology, how to efficiently complete the data processing of an augmented reality scene without adding extra equipment remains a major problem to be solved.
Summary of the invention
The application provides an information processing method and device for augmented reality, to solve the technical problem in related AR data-processing technology of how to efficiently complete the data processing of an augmented reality scene without adding extra equipment.

According to one aspect of the application, an information processing method for augmented reality is provided. The method includes: obtaining virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; obtaining a subsequent image of the target scene; determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene; when the viewing-angle change is within a preset range, correcting the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
Further, the method also includes: when the viewing-angle change exceeds the preset range, adjusting the shooting angle of a remote camera device according to the viewing-angle change, wherein the remote camera device is used to obtain image information of the remote scene; obtaining a corrected image of the remote scene through the remote camera device whose shooting angle has been adjusted; obtaining second corrected virtual media information according to the subsequent image and the corrected image of the remote scene; and combining the second corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
Further, adjusting the shooting angle of the remote camera device according to the viewing-angle change includes: determining multiple preset shooting angles corresponding to the remote camera device; determining a target shooting angle from the multiple preset shooting angles according to the viewing-angle change; and adjusting the shooting angle of the remote camera device to the target shooting angle.
Further, obtaining the virtual media information includes: obtaining multiple initial images of the target scene and at least one initial image of the remote scene, wherein the multiple initial images of the target scene are generated by shooting the target scene with multiple groups of lenses; determining depth information of the target scene according to the differences between matching features in the multiple initial images of the target scene; and fitting the depth information of the target scene with the at least one initial image of the remote scene to obtain the virtual media information.
Further, determining the depth information of the target scene according to the differences between matching features in the multiple initial images of the target scene includes: identifying the matching features in the multiple initial images of the target scene; determining the coordinates of the matching features in each initial image of the target scene; and determining the depth information of the target scene according to the difference of the coordinates of the matching features in at least two initial images of the target scene.
Further, determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene includes: determining a translation parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene; and determining a rotation parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene.
Further, when the remote camera device obtains the corrected image of the remote scene, the target shooting angles adopted by the remote camera device include at least one standard shooting angle, wherein the real-world coordinates corresponding to the standard shooting angle are aligned with the virtual-world coordinates corresponding to the virtual media information.
According to another aspect of the application, an information processing device for augmented reality is provided. The device includes: a first acquiring unit for obtaining virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; a second acquiring unit for obtaining a subsequent image of the target scene; a determining unit for determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene; a third acquiring unit for correcting the virtual media information according to the viewing-angle change when the viewing-angle change is within a preset range, to obtain first corrected virtual media information; and a first generating unit for combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
According to another aspect of the application, a storage medium is provided. The storage medium includes a stored program, wherein the program executes the information processing method for augmented reality of any one of the above.

According to another aspect of the application, a processor is provided. The processor is used to run a program, wherein the program, when running, executes the information processing method for augmented reality of any one of the above.
Through the application, the following steps are adopted: obtaining virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; obtaining a subsequent image of the target scene; determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene; when the viewing-angle change is within a preset range, correcting the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display. This solves the technical problem in related AR data processing of how to complete the data processing of an augmented reality scene without adding extra equipment.

That is, when the user terminal changes its viewing angle, the depth-map calculation is not repeated; instead, it is first judged whether the viewing-angle change of the user terminal is within the preset range. If it is, correction processing (translation and/or rotation) is applied directly to the previously determined virtual media information to obtain the first corrected virtual media information, which is then combined with the subsequent image obtained by the user terminal in real time to produce the augmented reality scene for display.

In other words, without adding extra equipment, the data processing of the augmented reality scene can still be completed efficiently; the scheme avoids having to recompute depth information for the updated subsequent image every time the user terminal changes its viewing angle, saving a large amount of depth-information processing time.
Brief description of the drawings
The accompanying drawings, which form a part of this application, are provided to give a further understanding of the application; the schematic embodiments of the application and their descriptions are used to explain the application and do not constitute an undue limitation on it. In the drawings:

Fig. 1 is a flowchart of the information processing method for augmented reality provided according to the first embodiment of the application;

Fig. 2 is a flowchart of the information processing method for augmented reality provided according to the second embodiment of the application; and

Fig. 3 is a schematic diagram of the information processing device for augmented reality provided according to an embodiment of the application.
Detailed description of the embodiments
It should be noted that, in the absence of conflict, the embodiments of the application and the features in the embodiments may be combined with each other. The application is described in detail below with reference to the drawings and in conjunction with the embodiments.

In order to enable those skilled in the art to better understand the scheme of the application, the technical scheme in the embodiments of the application will be described clearly and completely below in conjunction with the drawings of the embodiments. Obviously, the described embodiments are only a part of the embodiments of the application, rather than all of them. Based on the embodiments in the application, all other embodiments obtained by those of ordinary skill in the art without creative work shall fall within the scope of protection of the application.

It should be noted that the terms "first", "second", and the like in the description, claims, and drawings of the application are used to distinguish similar objects and are not used to describe a particular order or precedence. It should be understood that data so used are interchangeable where appropriate, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover a non-exclusive inclusion; for example, a process, method, system, product, or device containing a series of steps or units is not necessarily limited to those steps or units explicitly listed, but may include other steps or units not explicitly listed or inherent to the process, method, product, or device.
According to the first embodiment of the application, an information processing method for augmented reality is provided.

Fig. 1 is a flowchart of the information processing method for augmented reality according to the first embodiment of the application. As shown in Fig. 1, the method includes the following steps:

Step S102: obtain virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene.

Step S104: obtain a subsequent image of the target scene.

Step S106: determine the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene.

Step S108: when the viewing-angle change is within a preset range, correct the virtual media information according to the viewing-angle change to obtain first corrected virtual media information.

Step S110: combine the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
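The flow of steps S102-S110 can be sketched as follows. This is a minimal illustration under assumed data structures (a pose tuple and dictionary-based media) and assumed threshold values; none of the helper names come from the patent itself.

```python
# Sketch of steps S102-S110: when the viewing-angle change stays within
# a preset range, correct the cached virtual media information instead
# of recomputing the depth map. Thresholds and structures are assumed.

def within_preset_range(view_change, max_shift=0.05, max_rot_deg=5.0):
    """A view change counts as 'small' if each translation component (m)
    and the rotation (degrees) stay under the preset bounds."""
    tx, ty, tz, rot = view_change
    return all(abs(t) <= max_shift for t in (tx, ty, tz)) and abs(rot) <= max_rot_deg

def correct_media(media, view_change):
    """S108: apply the same translation/rotation to the virtual media."""
    tx, ty, tz, rot = view_change
    x, y, z, r = media["pose"]
    return {**media, "pose": (x + tx, y + ty, z + tz, r + rot)}

def process_frame(media, initial_view, subsequent_view, subsequent_image):
    # S106: viewing-angle change of the subsequent view vs. the initial view
    view_change = tuple(s - i for s, i in zip(subsequent_view, initial_view))
    if within_preset_range(view_change):
        corrected = correct_media(media, view_change)           # S108
        return {"media": corrected, "frame": subsequent_image}  # S110
    return None  # out of range: fall back to the second embodiment
```

The point of the sketch is that the within-range path touches only a pose, never the depth map, which is where the claimed efficiency comes from.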
In the information processing method for augmented reality provided by the first embodiment of the application, virtual media information is obtained, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; a subsequent image of the target scene is obtained; the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene is determined; when the viewing-angle change is within a preset range, the virtual media information is corrected according to the viewing-angle change to obtain first corrected virtual media information; and the first corrected virtual media information is combined with the subsequent image of the target scene to generate an augmented reality scene for display. This solves the technical problem in related AR data processing of how to complete the data processing of an augmented reality scene without adding extra equipment.

That is, when the user terminal changes its viewing angle, the depth-map calculation is not repeated; instead, it is first judged whether the viewing-angle change of the user terminal is within the preset range. If it is, correction processing (translation and/or rotation) is applied directly to the previously determined virtual media information to obtain the first corrected virtual media information, which is then combined with the subsequent image obtained by the user terminal in real time to produce the augmented reality scene for display.

In other words, without adding extra equipment, the data processing of the augmented reality scene can still be completed efficiently; the scheme avoids having to recompute depth information for the updated subsequent image every time the user terminal changes its viewing angle, saving a large amount of depth-information processing time.
In addition, the second embodiment of the application also provides an information processing method for augmented reality.

Fig. 2 is a flowchart of the information processing method for augmented reality according to the second embodiment of the application. As shown in Fig. 2, the method further includes the following steps:

Step S202: when the viewing-angle change exceeds the preset range, adjust the shooting angle of a remote camera device according to the viewing-angle change, wherein the remote camera device is used to obtain image information of the remote scene.

Step S204: obtain a corrected image of the remote scene through the remote camera device whose shooting angle has been adjusted.

Step S206: obtain second corrected virtual media information according to the subsequent image and the corrected image of the remote scene.

Step S208: combine the second corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.

That is, when the user terminal changes its viewing angle, the depth-map calculation is not simply repeated; it is first judged whether the viewing-angle change of the user terminal is within the preset range. If the viewing-angle change exceeds the preset range, the shooting angle of the remote camera device is remotely adjusted according to the viewing-angle change of the user terminal, the corrected image of the remote scene is reacquired through the remote camera device whose shooting angle has been adjusted, and the second corrected virtual media information is further obtained; the second corrected virtual media information is then combined with the subsequent image of the target scene to obtain the augmented reality scene for display. This avoids the situation in which, when the viewing-angle change of the user terminal is too large, correcting the virtual media information only according to the viewing-angle change leads to an unsatisfactory correction result.
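The decision between the two correction paths can be sketched as a small dispatcher. The helper callables are injected here because the patent does not fix their implementation; the scalar view-change and threshold are simplifying assumptions.

```python
# Dispatch between the first embodiment (cheap in-place correction) and
# the second (re-shoot the remote scene and rebuild the media), based
# on the magnitude of the viewing-angle change. Helpers are assumptions.

def update_scene(view_change, preset_range, correct, reshoot_and_rebuild, combine, frame):
    """Return the AR frame for display, choosing the correction path."""
    if abs(view_change) <= preset_range:
        media = correct(view_change)               # S108: first correction
    else:
        media = reshoot_and_rebuild(view_change)   # S202-S206: second correction
    return combine(media, frame)                   # S110 / S208
```

For instance, with a 5-degree preset range, a 2-degree change takes the cheap path while a 30-degree change triggers the remote re-shoot.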
In the third embodiment of the application, obtaining the virtual media information (step S102) can be implemented as follows: step A1, obtain multiple initial images of the target scene and at least one initial image of the remote scene, wherein the multiple initial images of the target scene are generated by shooting the target scene with multiple groups of lenses; step A2, determine the depth information of the target scene according to the differences between matching features in the multiple initial images of the target scene; step A3, fit the depth information of the target scene with the at least one initial image of the remote scene to obtain the virtual media information.
In the fourth embodiment of the application, determining the depth information of the target scene according to the differences between matching features in the multiple initial images of the target scene (step A2) includes: identifying the matching features in the multiple initial images of the target scene; determining the coordinates of the matching features in each initial image of the target scene; and determining the depth information of the target scene according to the difference of the coordinates of the matching features in at least two initial images of the target scene.
Further, the above multiple initial images of the target scene are obtained by shooting the target scene with two groups of lenses of the user terminal, wherein the two groups of lenses of the user terminal have the same field of view (for example, both fields of view are 60°, 80°, 100°, etc.). That is, the target scene is shot by the two groups of lenses of the user terminal to obtain two groups of initial images of the target scene (the multiple initial images of the target scene), and the two groups of initial images are then processed by a binocular ranging method to obtain the depth map (depth information) of the target scene.
It should be understood that the translational velocity and/or rotational acceleration of the user terminal when it obtains the subsequent image of the target scene may be greater than when it obtains the initial image. For example, the user terminal may be stationary when obtaining the initial image of the target scene and in motion when obtaining the subsequent image.
For example, the principle of determining the depth map of the target scene by the binocular ranging method is as follows: in the ideal case, the same scenery lies on the same shooting scan line in the pictures taken by the two groups of lenses; in the non-ideal case, the same scenery lies on different scan lines in the two pictures, and image rectification using pre-stored data is then required, which is equivalent to converting the images to the ideal case in which the two groups of lenses are coaxial and coplanar. After that, matching features are searched for along the scan line, the coordinate difference L1 − L2 of a matching feature between the images taken by the two groups of lenses is calculated, and the coordinate difference is further processed to obtain the depth information of the corresponding image of the target scene (that is, the depth map of the target scene, which is used so that the 3D model corresponding to a target object in the remote scene can be inserted).

The above calculation on the coordinate difference can be written as z = EFL · [1 + B1/(L1 − L2)], where EFL is the focal length and B1 is a proportionality coefficient.
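The underlying binocular relation is the standard one: depth = focal length × baseline / disparity. A minimal sketch using that textbook formula (the patent's exact constants EFL and B1 are not reproduced here, and the numeric values below are illustrative assumptions):

```python
def stereo_depth(x_left, x_right, focal_length_px, baseline_m):
    """Depth of a matched feature from its horizontal disparity.

    x_left / x_right: pixel column of the same matching feature in the
    rectified left/right images; after rectification both lie on the
    same scan line, as the description requires.
    """
    disparity = x_left - x_right  # the coordinate difference L1 - L2
    if disparity <= 0:
        raise ValueError("non-positive disparity: feature at or beyond infinity")
    return focal_length_px * baseline_m / disparity

# e.g. 700 px focal length, 6 cm baseline, 14 px disparity -> 3.0 m
```

Note the inverse relation: the smaller the disparity between the two lens images, the farther the feature, which is why distant features need accurate sub-pixel matching.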
In the fifth embodiment of the application, determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene (step S106) can be implemented as follows: step B, when the user terminal moves and/or rotates, determine the viewing-angle change according to the translation and/or rotation of the user terminal, wherein the two groups of lenses of the user terminal are used to shoot the target scene to obtain the initial image and the subsequent image of the target scene.
Similarly, in the sixth embodiment of the application, determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene (step S106) can also be implemented as follows: step C1, determine the translation parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene; step C2, determine the rotation parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene.

That is, the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image mainly involves two parameters: a viewing-angle translation parameter and a viewing-angle rotation parameter.
In the seventh embodiment of the application, correcting the virtual media information according to the viewing-angle change to obtain the first corrected virtual media information (step S108) can be implemented as follows: perform pose-transformation processing on the virtual media information according to the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image. For example, if the viewing angle corresponding to the subsequent image of the target scene translates 10 cm along the Y axis of a preset coordinate system relative to the viewing angle corresponding to the initial image, the virtual media information also translates 10 cm along the Y axis of the preset coordinate system; if the viewing angle corresponding to the subsequent image rotates 60° counterclockwise about the Y axis of the preset coordinate system relative to the viewing angle corresponding to the initial image, the virtual media information also rotates 60° counterclockwise about the Y axis of the preset coordinate system.
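The translation/rotation correction in this example can be sketched point-wise as follows. The rotation matrix convention about the Y axis is an assumption; the patent does not specify one, so this is a hedged illustration of the pose transform, not the claimed implementation.

```python
import math

def rotate_about_y(point, angle_deg):
    """Rotate a 3D point counterclockwise about the Y axis (one common convention)."""
    a = math.radians(angle_deg)
    x, y, z = point
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))

def correct_virtual_media(point, translation=(0.0, 0.10, 0.0), rotation_deg=60.0):
    """Apply the example's correction: rotate 60 degrees about Y, then
    translate 10 cm along Y, mirroring the viewing-angle change."""
    p = rotate_about_y(point, rotation_deg)
    return tuple(c + t for c, t in zip(p, translation))
```

Applying the same rigid transform to every point of the virtual media keeps it consistent with the moved viewing angle without touching the depth map.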
In the eighth embodiment of the application, adjusting the shooting angle of the remote camera device according to the viewing-angle change includes: determining multiple preset shooting angles corresponding to the remote camera device; determining a target shooting angle from the multiple preset shooting angles according to the viewing-angle change; and adjusting the shooting angle of the remote camera device to the target shooting angle.

In an optional example, the remote camera device can be at least one movable camera, or multiple fixed cameras. When the remote camera device is multiple fixed cameras, adjusting its shooting angle to the target shooting angle can be: close the fixed camera at the current shooting angle and open the fixed camera at the target shooting angle, wherein there may be more than one fixed camera at the target shooting angle.
It should be understood that when the remote camera device is multiple fixed cameras, the fixed cameras are discretely distributed in the remote scene, and their shooting angles are mutually perpendicular.
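When the remote device is a set of fixed cameras, "adjusting the shooting angle" reduces to switching which camera is open. A sketch with four mutually perpendicular cameras, as in the description; the camera names and the scalar angle model are illustrative assumptions.

```python
# Four fixed cameras at mutually perpendicular preset shooting angles;
# switching cameras replaces physically re-pointing a single camera.

CAMERAS = {0: "cam_a", 90: "cam_b", 180: "cam_c", 270: "cam_d"}

def target_angle(view_change_deg, presets=tuple(CAMERAS)):
    """Preset shooting angle closest (on the circle) to the needed direction."""
    need = view_change_deg % 360
    return min(presets, key=lambda a: min(abs(a - need), 360 - abs(a - need)))

def switch_camera(active_angle, view_change_deg):
    """Close the fixed camera at the current angle, open the target one."""
    new_angle = target_angle(view_change_deg)
    if new_angle == active_angle:
        return active_angle, None
    return new_angle, (CAMERAS[active_angle], CAMERAS[new_angle])  # (closed, opened)
```

For example, an 80° viewing-angle change selects the 90° preset, so the 0° camera is closed and the 90° camera is opened.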
In the ninth embodiment of the application, when the remote camera device obtains the corrected image of the remote scene, the target shooting angles adopted by the remote camera device include at least one standard shooting angle, wherein the real-world coordinates corresponding to the standard shooting angle are aligned with the virtual-world coordinates of the virtual media information.

That is, besides obtaining the corrected image of the remote scene, the remote camera device can also reset back to an initial position, for example moving the shooting direction of all cameras to a horizontal position relative to the direction of gravity, so as to align with the virtual space produced from the depth information.
The above embodiments are further illustrated below with an example:

In an optional example, a user conducts a video call, through a user terminal located in target scene A, with a target object (a person or article) in remote scene B. Multiple remote camera devices are arranged around the target object in remote scene B, wherein the multiple remote camera devices maintain communication connections with the user terminal and transmit the captured image information to the user terminal.

The user terminal or a director server then performs construction processing on the image information acquired from the remote scene to obtain the 3D model corresponding to the target object in remote scene B. The user terminal establishes the depth map of target scene A through its corresponding two groups of lenses, and then establishes the 3D map corresponding to target scene A based on that depth map. Further, the 3D model corresponding to the target object in remote scene B is displayed in the 3D map corresponding to target scene A.

Further, when the user terminal only moves slightly or rotates finely, the user terminal does not need to obtain new image information from the remote camera devices in remote scene B; it only needs to move or rotate the 3D model corresponding to the target object in remote scene B according to the movement or rotation sensed by its sensors.

Further, when the lens angle of the user terminal changes by a relatively large margin, the user terminal again obtains new image information from the remote camera devices in remote scene B, wherein the new image information is shot by the remote camera devices after their shooting orientation/angle has been changed. Thus, even when various movements change the shooting range or angle of the user terminal, the 3D model corresponding to the target object in remote scene B and the 3D map corresponding to target scene A (the augmented reality scene) remain adapted to each other.
It should be noted that the steps shown in the flowcharts of the drawings can be executed in a computer system such as a set of computer-executable instructions, and that, although a logical order is shown in the flowcharts, in some cases the steps shown or described can be executed in an order different from the one herein.
The tenth embodiment of the application additionally provides an information processing device for augmented reality. It should be noted that the information processing device for augmented reality of the tenth embodiment of the application can be used to execute the information processing method for augmented reality provided by the embodiments of the application. The information processing device for augmented reality provided by the tenth embodiment of the application is introduced below.

Fig. 3 is a schematic diagram of the information processing device for augmented reality according to the tenth embodiment of the application. As shown in Fig. 3, the device includes: a first acquiring unit, a second acquiring unit, a determining unit, a third acquiring unit, and a first generating unit.

The first acquiring unit is used to obtain virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene;

the second acquiring unit is used to obtain a subsequent image of the target scene;

the determining unit is used to determine the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene;

the third acquiring unit is used to correct the virtual media information according to the viewing-angle change when the viewing-angle change is within a preset range, to obtain first corrected virtual media information;

the first generating unit is used to combine the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
In the augmented reality information processing apparatus provided by the tenth embodiment, the first acquisition unit obtains virtual media information determined from the initial image of the target scene and the initial image of the remote scene; the second acquisition unit obtains a subsequent image of the target scene; the determination unit determines the viewing-angle change between the viewing angle of the subsequent image and the viewing angle of the initial image of the target scene; in a case where the viewing-angle change is within a preset range, the third acquisition unit corrects the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and the first generation unit combines the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display. This solves the technical problem in related AR data processing of how to complete the data processing of an augmented reality scene without adding extra equipment.
That is, when the user terminal changes its viewing angle, the depth-map calculation is not repeated. Instead, it is first judged whether the viewing-angle change of the user terminal is within a preset range; if so, the previously determined virtual media information is directly corrected (translated and/or rotated) to obtain first corrected virtual media information, which is then combined with the subsequent image obtained in real time by the user terminal to obtain an augmented reality scene for display.
In other words, the data processing of the augmented reality scene can still be completed efficiently without adding extra equipment; recomputing depth information for the updated subsequent image every time the user terminal changes its viewing angle is avoided, saving a large amount of depth-information processing time.
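The saving described above amounts to a small dispatch: the cheap correction path is taken while the viewing-angle change stays within the preset range, and the expensive re-acquisition path only when it does not. A sketch in which the media representation, the threshold values, and both stand-in functions are hypothetical, not from the patent:

```python
def correct_media(media, shift, turn):
    # Cheap path: translate/rotate the cached virtual media (illustrative).
    return {"offset": media["offset"] + shift,
            "heading": media["heading"] + turn,
            "rebuilt": False}

def reacquire_media():
    # Expensive path stand-in: re-aim the remote camera, recompute depth,
    # and refit the virtual media from scratch.
    return {"offset": 0.0, "heading": 0.0, "rebuilt": True}

def process_view_change(media, view_change, max_shift=0.5, max_turn=15.0):
    """Correct cached virtual media for a small viewing-angle change,
    or fall back to full re-acquisition for a large one."""
    shift, turn = view_change
    if abs(shift) <= max_shift and abs(turn) <= max_turn:
        return correct_media(media, shift, turn)
    return reacquire_media()

cached = {"offset": 0.0, "heading": 0.0}
small = process_view_change(cached, (0.1, 5.0))   # within the preset range
large = process_view_change(cached, (0.1, 40.0))  # rotation exceeds the range
```

The preset range itself is a tuning choice: a wider range saves more depth computation at the cost of larger drift in the corrected media.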
In the eleventh embodiment of the present application, the apparatus further includes: an adjustment unit, configured to, in a case where the viewing-angle change exceeds the preset range, adjust the shooting angle of a remote shooting device according to the viewing-angle change, wherein the remote shooting device is used to obtain image information of the remote scene; a fourth acquisition unit, configured to obtain a corrected image of the remote scene through the remote shooting device whose shooting angle has been adjusted; a fifth acquisition unit, configured to obtain second corrected virtual media information according to the subsequent image and the corrected image of the remote scene; and a second generation unit, configured to combine the second corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
In the twelfth embodiment of the present application, the adjustment unit includes: a first determining module, configured to determine multiple preset shooting angles corresponding to the remote shooting device; a second determining module, configured to determine a target shooting angle from the multiple preset shooting angles according to the viewing-angle change; and an adjustment module, configured to adjust the shooting angle of the remote shooting device to the target shooting angle.
In the thirteenth embodiment of the present application, the first acquisition unit includes: a first acquisition module, configured to obtain initial images of multiple target scenes and an initial image of at least one remote scene, wherein the initial images of the multiple target scenes are generated by shooting the target scene with multiple groups of camera lenses; a third determining module, configured to determine depth information of the target scene according to differences between matching features in the initial images of the multiple target scenes; and a second acquisition module, configured to fit the depth information of the target scene with the initial image of the at least one remote scene to obtain the virtual media information.
In the fourteenth embodiment of the present application, the third determining module includes: an identification submodule, configured to identify the matching features in the initial images of the multiple target scenes; a first determining submodule, configured to determine the coordinates of the matching features in the initial image of each target scene; and a second determining submodule, configured to determine the depth information of the target scene according to the differences between the coordinates of the matching features in the initial images of at least two target scenes.
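For a calibrated, rectified pair of lenses, the coordinate difference of a matched feature between two initial images is its disparity, and depth follows from the standard pinhole-stereo relation depth = focal length × baseline / disparity. A sketch of that relation; the numeric values are illustrative and not taken from the patent:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Depth of a matched feature from the difference of its x-coordinates
    in two rectified initial images (pinhole stereo model)."""
    disparity = x_left - x_right  # pixels; a larger difference means closer
    if disparity <= 0:
        raise ValueError("matched feature must have positive disparity")
    return focal_px * baseline_m / disparity

# Feature seen at x=400 px through one lens and x=380 px through the other,
# with an (assumed) 800 px focal length and 10 cm baseline between lenses.
z = depth_from_disparity(400.0, 380.0, focal_px=800.0, baseline_m=0.1)
```

Repeating this for every matched feature yields the sparse depth information of the target scene that the virtual media information is fitted against.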
In the fifteenth embodiment of the present application, the determination unit includes: a fourth determining module, configured to determine a translation change parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene; and a fifth determining module, configured to determine a rotation change parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene.
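The fourth and fifth determining modules split the viewing-angle change into a translation parameter and a rotation parameter. A minimal 2D sketch, assuming a pose of (x, y, heading in degrees); the pose representation and units are assumptions the patent does not specify:

```python
import math

def view_change(initial_pose, subsequent_pose):
    """Split the change between two viewing angles into a translation
    parameter and a rotation parameter. A pose is (x, y, heading_deg)."""
    dx = subsequent_pose[0] - initial_pose[0]
    dy = subsequent_pose[1] - initial_pose[1]
    translation = math.hypot(dx, dy)  # straight-line distance moved
    # Signed smallest rotation, wrapped into [-180, 180) degrees.
    rotation = (subsequent_pose[2] - initial_pose[2] + 180.0) % 360.0 - 180.0
    return translation, rotation

t, r = view_change((0.0, 0.0, 10.0), (3.0, 4.0, 350.0))
```

Each parameter can then be checked against its own preset bound, so a pure rotation and a pure translation are handled by the same dispatch.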
In the sixteenth embodiment of the present application, in a case where the remote shooting device obtains the corrected image of the remote scene, the target shooting angles used by the remote shooting device include at least one standard shooting angle, wherein the real-world coordinates corresponding to the standard shooting angle are aligned with the virtual-world coordinates corresponding to the virtual media information.
In the seventeenth embodiment of the present application, the augmented reality information processing apparatus includes a processor and a memory. The above first acquisition unit, second acquisition unit, determination unit, third acquisition unit, first generation unit, and the like are stored in the memory as program units, and the processor executes the program units stored in the memory to realize the corresponding functions.
The processor includes a kernel, and the kernel retrieves the corresponding program units from the memory. One or more kernels may be provided. By adjusting kernel parameters, the data processing of the augmented reality scene can still be completed efficiently without adding extra equipment; recomputing depth information for the updated subsequent image every time the user terminal changes its viewing angle is avoided, saving a large amount of depth-information processing time.
The memory may include forms such as non-volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM); the memory includes at least one memory chip.
In the eighteenth embodiment of the present application, an embodiment of the present invention provides a storage medium on which a program is stored, wherein the program, when executed by a processor, implements the augmented reality information processing method.
In the nineteenth embodiment of the present application, an embodiment of the present invention provides a processor for running a program, wherein the program, when run, executes the augmented reality information processing method.
In the twentieth embodiment of the present application, an embodiment of the present invention provides a device including a processor, a memory, and a program stored on the memory and runnable on the processor, wherein the processor, when executing the program, implements the following steps: obtaining virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; obtaining a subsequent image of the target scene; determining a viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene; in a case where the viewing-angle change is within a preset range, correcting the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
Further, the method also includes: in a case where the viewing-angle change exceeds the preset range, adjusting the shooting angle of a remote shooting device according to the viewing-angle change, wherein the remote shooting device is used to obtain image information of the remote scene; obtaining a corrected image of the remote scene through the remote shooting device whose shooting angle has been adjusted; obtaining second corrected virtual media information according to the subsequent image and the corrected image of the remote scene; and combining the second corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
Further, adjusting the shooting angle of the remote shooting device according to the viewing-angle change includes: determining multiple preset shooting angles corresponding to the remote shooting device; determining a target shooting angle from the multiple preset shooting angles according to the viewing-angle change; and adjusting the shooting angle of the remote shooting device to the target shooting angle.
Further, obtaining the virtual media information includes: obtaining initial images of multiple target scenes and an initial image of at least one remote scene, wherein the initial images of the multiple target scenes are generated by shooting the target scene with multiple groups of camera lenses; determining depth information of the target scene according to differences between matching features in the initial images of the multiple target scenes; and fitting the depth information of the target scene with the initial image of the at least one remote scene to obtain the virtual media information.
Further, determining the depth information of the target scene according to the differences between the matching features in the initial images of the multiple target scenes includes: identifying the matching features in the initial images of the multiple target scenes; determining the coordinates of the matching features in the initial image of each target scene; and determining the depth information of the target scene according to the differences between the coordinates of the matching features in the initial images of at least two target scenes.
Further, determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene includes: determining a translation change parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene; and determining a rotation change parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene.
Further, in a case where the remote shooting device obtains the corrected image of the remote scene, the target shooting angles used by the remote shooting device include at least one standard shooting angle, wherein the real-world coordinates corresponding to the standard shooting angle are aligned with the virtual-world coordinates corresponding to the virtual media information. The device herein may be a server, a PC, a PAD, a mobile phone, or the like.
In the twenty-first embodiment of the present application, the present application also provides a computer program product which, when executed on a data processing device, is adapted to execute a program initialized with the following method steps: obtaining virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene; obtaining a subsequent image of the target scene; determining a viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene; in a case where the viewing-angle change is within a preset range, correcting the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
Further, the method also includes: in a case where the viewing-angle change exceeds the preset range, adjusting the shooting angle of a remote shooting device according to the viewing-angle change, wherein the remote shooting device is used to obtain image information of the remote scene; obtaining a corrected image of the remote scene through the remote shooting device whose shooting angle has been adjusted; obtaining second corrected virtual media information according to the subsequent image and the corrected image of the remote scene; and combining the second corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
Further, adjusting the shooting angle of the remote shooting device according to the viewing-angle change includes: determining multiple preset shooting angles corresponding to the remote shooting device; determining a target shooting angle from the multiple preset shooting angles according to the viewing-angle change; and adjusting the shooting angle of the remote shooting device to the target shooting angle.
Further, obtaining the virtual media information includes: obtaining initial images of multiple target scenes and an initial image of at least one remote scene, wherein the initial images of the multiple target scenes are generated by shooting the target scene with multiple groups of camera lenses; determining depth information of the target scene according to differences between matching features in the initial images of the multiple target scenes; and fitting the depth information of the target scene with the initial image of the at least one remote scene to obtain the virtual media information.
Further, determining the depth information of the target scene according to the differences between the matching features in the initial images of the multiple target scenes includes: identifying the matching features in the initial images of the multiple target scenes; determining the coordinates of the matching features in the initial image of each target scene; and determining the depth information of the target scene according to the differences between the coordinates of the matching features in the initial images of at least two target scenes.
Further, determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene includes: determining a translation change parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene; and determining a rotation change parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene.
Further, in a case where the remote shooting device obtains the corrected image of the remote scene, the target shooting angles used by the remote shooting device include at least one standard shooting angle, wherein the real-world coordinates corresponding to the standard shooting angle are aligned with the virtual-world coordinates corresponding to the virtual media information.
It should be understood by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Therefore, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The present application is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to embodiments of the present application. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, special-purpose computer, embedded processor, or other programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce an apparatus for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.
In a typical configuration, a computing device includes one or more processors (CPUs), an input/output interface, a network interface, and memory.
The memory may include forms such as non-volatile memory in a computer-readable medium, random access memory (RAM), and/or non-volatile memory such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile disc (DVD) or other optical storage, magnetic cassette, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
It should also be noted that the terms "include" and "comprise", and any variants thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element qualified by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes the element.
It will be understood by those skilled in the art that embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present application may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical memory) containing computer-usable program code.
The above are only embodiments of the present application and are not intended to limit the present application. Those skilled in the art may make various changes and variations to the present application. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall fall within the scope of the claims of the present application.
Claims (10)
1. An information processing method for augmented reality, characterized by comprising:
obtaining virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene;
obtaining a subsequent image of the target scene;
determining a viewing-angle change between a viewing angle corresponding to the subsequent image of the target scene and a viewing angle corresponding to the initial image of the target scene;
in a case where the viewing-angle change is within a preset range, correcting the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and
combining the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
2. The method according to claim 1, characterized in that the method further comprises:
in a case where the viewing-angle change exceeds the preset range, adjusting a shooting angle of a remote shooting device according to the viewing-angle change, wherein the remote shooting device is used to obtain image information of the remote scene;
obtaining a corrected image of the remote scene through the remote shooting device whose shooting angle has been adjusted;
obtaining second corrected virtual media information according to the subsequent image and the corrected image of the remote scene; and
combining the second corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
3. The method according to claim 2, characterized in that adjusting the shooting angle of the remote shooting device according to the viewing-angle change comprises:
determining multiple preset shooting angles corresponding to the remote shooting device;
determining a target shooting angle from the multiple preset shooting angles according to the viewing-angle change; and
adjusting the shooting angle of the remote shooting device to the target shooting angle.
4. The method according to claim 1 or 2, characterized in that obtaining the virtual media information comprises:
obtaining initial images of multiple target scenes and an initial image of at least one remote scene, wherein the initial images of the multiple target scenes are generated by shooting the target scene with multiple groups of camera lenses;
determining depth information of the target scene according to differences between matching features in the initial images of the multiple target scenes; and
fitting the depth information of the target scene with the initial image of the at least one remote scene to obtain the virtual media information.
5. The method according to claim 4, characterized in that determining the depth information of the target scene according to the differences between the matching features in the initial images of the multiple target scenes comprises:
identifying the matching features in the initial images of the multiple target scenes;
determining coordinates of the matching features in the initial image of each target scene; and
determining the depth information of the target scene according to differences between the coordinates of the matching features in the initial images of at least two target scenes.
6. The method according to claim 1 or 2, characterized in that determining the viewing-angle change between the viewing angle corresponding to the subsequent image of the target scene and the viewing angle corresponding to the initial image of the target scene comprises:
determining a translation change parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene; and
determining a rotation change parameter of the viewing angle corresponding to the subsequent image relative to the viewing angle corresponding to the initial image of the target scene.
7. The method according to claim 3, characterized in that, in a case where the remote shooting device obtains the corrected image of the remote scene, the target shooting angles used by the remote shooting device include at least one standard shooting angle, wherein real-world coordinates corresponding to the standard shooting angle are aligned with virtual-world coordinates corresponding to the virtual media information.
8. An information processing apparatus for augmented reality, characterized by comprising:
a first acquisition unit, configured to obtain virtual media information, wherein the virtual media information is determined from an initial image of a target scene and an initial image of a remote scene;
a second acquisition unit, configured to obtain a subsequent image of the target scene;
a determination unit, configured to determine a viewing-angle change between a viewing angle corresponding to the subsequent image of the target scene and a viewing angle corresponding to the initial image of the target scene;
a third acquisition unit, configured to, in a case where the viewing-angle change is within a preset range, correct the virtual media information according to the viewing-angle change to obtain first corrected virtual media information; and
a first generation unit, configured to combine the first corrected virtual media information with the subsequent image of the target scene to generate an augmented reality scene for display.
9. A storage medium, characterized in that the storage medium comprises a stored program, wherein the program executes the augmented reality information processing method according to any one of claims 1 to 7.
10. A processor, characterized in that the processor is configured to run a program, wherein the program, when run, executes the augmented reality information processing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910142496.9A CN109978945B (en) | 2019-02-26 | 2019-02-26 | Augmented reality information processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910142496.9A CN109978945B (en) | 2019-02-26 | 2019-02-26 | Augmented reality information processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109978945A true CN109978945A (en) | 2019-07-05 |
CN109978945B CN109978945B (en) | 2021-08-31 |
Family
ID=67077448
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910142496.9A Active CN109978945B (en) | 2019-02-26 | 2019-02-26 | Augmented reality information processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109978945B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111881861A (en) * | 2020-07-31 | 2020-11-03 | 北京市商汤科技开发有限公司 | Display method, device, equipment and storage medium |
WO2021097803A1 (en) * | 2019-11-22 | 2021-05-27 | 北京小米移动软件有限公司 | Resource switching method and apparatus and storage medium |
CN113220251A (en) * | 2021-05-18 | 2021-08-06 | 北京达佳互联信息技术有限公司 | Object display method, device, electronic equipment and storage medium |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016071244A2 (en) * | 2014-11-06 | 2016-05-12 | Koninklijke Philips N.V. | Method and system of communication for use in hospitals |
CN105867617A (en) * | 2016-03-25 | 2016-08-17 | 京东方科技集团股份有限公司 | Augmented reality device and system and image processing method and device |
CN105955456A (en) * | 2016-04-15 | 2016-09-21 | 深圳超多维光电子有限公司 | Virtual reality and augmented reality fusion method, device and intelligent wearable equipment |
CN106162204A (en) * | 2016-07-06 | 2016-11-23 | 传线网络科技(上海)有限公司 | Panoramic video generation, player method, Apparatus and system |
CN106302132A (en) * | 2016-09-14 | 2017-01-04 | 华南理工大学 | A kind of 3D instant communicating system based on augmented reality and method |
CN106383587A (en) * | 2016-10-26 | 2017-02-08 | 腾讯科技(深圳)有限公司 | Augmented reality scene generation method, device and equipment |
CN106710002A (en) * | 2016-12-29 | 2017-05-24 | 深圳迪乐普数码科技有限公司 | AR implementation method and system based on positioning of visual angle of observer |
CN106973283A (en) * | 2017-03-30 | 2017-07-21 | 北京炫房科技有限公司 | A kind of method for displaying image and device |
US20170295229A1 (en) * | 2016-04-08 | 2017-10-12 | Osterhout Group, Inc. | Synchronizing head-worn computers |
CN107678538A (en) * | 2017-09-05 | 2018-02-09 | 北京原力创新科技有限公司 | Augmented reality system and information processing method therein, storage medium, processor |
CN108230428A (en) * | 2017-12-29 | 2018-06-29 | 掌阅科技股份有限公司 | E-book rendering method, electronic equipment and storage medium based on augmented reality |
-
2019
- 2019-02-26 CN CN201910142496.9A patent/CN109978945B/en active Active
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016071244A2 (en) * | 2014-11-06 | 2016-05-12 | Koninklijke Philips N.V. | Method and system of communication for use in hospitals |
CN105867617A (en) * | 2016-03-25 | 2016-08-17 | BOE Technology Group Co., Ltd. | Augmented reality device and system, and image processing method and device |
US20170295229A1 (en) * | 2016-04-08 | 2017-10-12 | Osterhout Group, Inc. | Synchronizing head-worn computers |
CN105955456A (en) * | 2016-04-15 | 2016-09-21 | Shenzhen Super Perfect Optics Ltd. | Virtual reality and augmented reality fusion method, device and smart wearable device |
CN106162204A (en) * | 2016-07-06 | 2016-11-23 | Chuanxian Network Technology (Shanghai) Co., Ltd. | Panoramic video generation and playback method, apparatus and system |
CN106302132A (en) * | 2016-09-14 | 2017-01-04 | South China University of Technology | Augmented-reality-based 3D instant messaging system and method |
CN106383587A (en) * | 2016-10-26 | 2017-02-08 | Tencent Technology (Shenzhen) Co., Ltd. | Augmented reality scene generation method, device and equipment |
CN106710002A (en) * | 2016-12-29 | 2017-05-24 | Shenzhen Dlp Digital Technology Co., Ltd. | AR implementation method and system based on observer viewing-angle positioning |
CN106973283A (en) * | 2017-03-30 | 2017-07-21 | Beijing Xuanfang Technology Co., Ltd. | Image display method and device |
CN107678538A (en) * | 2017-09-05 | 2018-02-09 | Beijing Yuanli Innovation Technology Co., Ltd. | Augmented reality system, information processing method therein, storage medium, and processor |
CN108230428A (en) * | 2017-12-29 | 2018-06-29 | Zhangyue Technology Co., Ltd. | Augmented-reality-based e-book rendering method, electronic device and storage medium |
Non-Patent Citations (2)
Title |
---|
MARUYAMA, KEISUKE et al.: "Smart Glasses for Neurosurgical Navigation by Augmented Reality", 《OPERATIVE NEUROSURGERY》 * |
SONG, KEFAN: "Research on Holographic Video Conferencing" (in Chinese), 《科技传播》 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021097803A1 (en) * | 2019-11-22 | 2021-05-27 | Beijing Xiaomi Mobile Software Co., Ltd. | Resource switching method and apparatus, and storage medium |
CN111881861A (en) * | 2020-07-31 | 2020-11-03 | Beijing SenseTime Technology Development Co., Ltd. | Display method, device, equipment and storage medium |
CN113220251A (en) * | 2021-05-18 | 2021-08-06 | Beijing Dajia Internet Information Technology Co., Ltd. | Object display method and device, electronic device and storage medium |
CN113220251B (en) * | 2021-05-18 | 2024-04-09 | Beijing Dajia Internet Information Technology Co., Ltd. | Object display method and device, electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109978945B (en) | 2021-08-31 |
Similar Documents
Publication | Title |
---|---|
US11783536B2 (en) | Image occlusion processing method, device, apparatus and computer storage medium |
CN106251403B (en) | Method, device and system for realizing a virtual three-dimensional scene |
CN107957775B (en) | Data object interaction method and device in a virtual reality space environment |
CN105704468B (en) | Stereoscopic display method and device for virtual and real scenes, and electronic device |
CN106162203B (en) | Panoramic video playback method, player and head-mounted virtual reality device |
JP7271099B2 (en) | File generator and file-based video generator |
US11282264B2 (en) | Virtual reality content display method and apparatus |
CN110249626B (en) | Method and device for realizing augmented reality image, terminal device and storage medium |
TW201709718A (en) | Method and apparatus for displaying a light field based image on a user's device, and corresponding computer program product |
CN110648274B (en) | Method and device for generating fisheye images |
CN110176032A (en) | Three-dimensional reconstruction method and device |
CN109978945A (en) | A kind of information processing method and device of augmented reality |
WO2023207452A1 (en) | Virtual reality-based video generation method and apparatus, device, and medium |
EP3899870B1 (en) | Cloud-based camera calibration |
CN111737518A (en) | Image display method and device based on a three-dimensional scene model, and electronic device |
CN114895796B (en) | Panoramic-image-based spatial interaction method, device and application |
US20150065221A1 (en) | Method and device for operating 3D virtual chessboard |
CN110187774A (en) | Optical see-through AR device and entity labeling method therefor |
CN110021071A (en) | Rendering method, device and equipment in an augmented reality application |
US20130093850A1 (en) | Image processing apparatus and method thereof |
CN109448117A (en) | Image rendering method, device and electronic device |
CN109302561A (en) | Image capture method, terminal and storage medium |
CN111899349B (en) | Model presentation method and device, electronic device and computer storage medium |
CN109862339A (en) | Augmented reality reproduction method, device, system, storage medium and processor |
CN115834860A (en) | Background blurring method, apparatus, device, storage medium and program product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||