
CN102509342A - Collaborative virtual and actual sheltering treatment method in shared enhanced real scene - Google Patents

Collaborative virtual and actual sheltering treatment method in shared enhanced real scene

Info

Publication number
CN102509342A
CN102509342A (application number CN2011102847343A / CN201110284734A)
Authority
CN
China
Prior art keywords
observation station
real
real object
virtual objects
augmented reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011102847343A
Other languages
Chinese (zh)
Inventor
陈小武
赵沁平
金鑫
林鸿昌
宋亚斐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beihang University
Original Assignee
Beihang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beihang University filed Critical Beihang University
Priority to CN2011102847343A priority Critical patent/CN102509342A/en
Publication of CN102509342A publication Critical patent/CN102509342A/en
Pending legal-status Critical Current

Links

Images

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention discloses a collaborative virtual-real occlusion handling method in a shared augmented reality scene. The method comprises the following steps: generating an augmented reality scene; estimating the position of a candidate observation point; projecting the virtual object and the real object whose occlusion relationship is to be analysed onto the viewing plane of the candidate observation point; taking the candidate observation point as the origin, sweeping the viewing plane horizontally with a ray perpendicular to the imaging plane; producing an occlusion code from the order in which the ray meets the two edges of the virtual object and the two edges of the real object during the sweep; determining, from the differences between occlusion codes, whether the virtual object and the real object observed from the candidate observation point are in an occlusion or a non-occlusion relationship; and further judging whether the virtual object occludes the real object or the real object occludes the virtual object, so as to reduce misjudgment. The method achieves fast judgment of the spatial occlusion relationship between the virtual and real objects for a new video sequence, and improves the accuracy of virtual-real occlusion judgment in the augmented reality scene.

Description

A collaborative virtual-real occlusion handling method in a shared augmented reality scene
Technical field
The present invention relates to the fields of virtual reality, augmented reality and computer graphics and image processing, and in particular to a collaborative virtual-real occlusion handling method in a shared augmented reality scene.
Background art
Virtual-real occlusion handling in an augmented reality scene mainly refers to judging and rendering, from the relative spatial positions of the virtual and real objects in the scene, the occlusion (covering) phenomena that may exist between them. The spatial occlusion relationship between virtual and real objects not only directly affects how seamlessly the virtual environment fuses with the real environment, but also affects the visual realism of the generated augmented reality scene, the user's perception of spatial orientation within it, and the spatial interactions and operations the user performs. Virtual-real occlusion handling is a key enabling technology for fusing virtual and real scenes; it is a research focus of augmented reality at the intersection of virtual reality, augmented reality, computer vision and related research directions; it embodies characteristic requirements of augmented reality such as virtual-real combination, real-time interaction and three-dimensional registration; and it can provide technical support for applications such as medical diagnosis and analysis, complex product design and manufacturing, construction planning and management, distance education and interactive communication, multi-user interactive digital entertainment, sports training simulation and analysis, manned spaceflight and space exploration, emergency handling and disaster-recovery training, and military simulation training and exercises, thereby promoting the development and practical application of virtual reality and augmented reality theory and technology.
Many well-known universities and research institutes are actively studying virtual-real occlusion handling for augmented reality scenes; computing the scene depth between the virtual and real objects is the crux of occlusion handling. Current approaches at home and abroad typically describe the real environment with video sequences or special sensors, and use the depth information of the real scene to handle the spatial occlusion between virtual and real objects. Because a single video sequence can only describe the real environment within one direction and range, an augmented reality scene based on a single video sequence cannot satisfy scene perception and interactive operation from different directions, so multiple video sequences are needed to represent the real environment of the scene from multiple directions. The problem with the prevailing methods is that the real-scene depth information must be computed for every video sequence; as the number of video sequences describing the real environment grows, this demands very strong computing power from the augmented reality system and markedly degrades its run-time performance and the timeliness of the user's work, which limits both the generation quality and the applicability of the augmented reality scene.
Summary of the invention
The present invention proposes a collaborative virtual-real occlusion handling idea for augmented reality scenes based on multiple video sequences: as the number of video sequences describing the real environment grows, the virtual-real occlusion relationships already established for existing video sequences are combined with the pose information between the sequences to collaboratively judge the virtual-real occlusion relationship of a new video sequence, thereby avoiding the large computational cost of recovering real-scene depth information from the video images. Without computing any video-image depth information, the method quickly distinguishes the case in which there is no spatial occlusion between the virtual and real objects from the case in which the virtual object occludes the real object.
The technical scheme provided by the invention is as follows:
A collaborative virtual-real occlusion handling method in a shared augmented reality scene, characterized by comprising the following steps:
Step 1: generate the augmented reality scene;
Step 2: estimate the position of a candidate observation point; project the virtual object and the real object whose occlusion relationship is to be analysed onto the viewing plane of this observation point; taking the observation point as the origin, take a ray perpendicular to said imaging plane and sweep it horizontally across said viewing plane;
Step 3: produce an occlusion code from the order in which the ray meets the two edges of the virtual object and the two edges of the real object during the sweep;
Step 4: from the differences between occlusion codes, determine whether the virtual object and the real object observed from this candidate observation point are in an occlusion or a non-occlusion relationship;
Step 5: further judge whether the virtual occludes the real or the real occludes the virtual, and reject observation points lying in the misjudgment region so as to reduce misjudgment.
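For illustration only (the patent prescribes no implementation language), the following Python sketch shows one way steps 2-4 could be realised, under the assumption that each object's projection onto the viewing plane is reduced to the interval spanned by its two edges along the sweep direction; the function names and the '1'/'0' edge labels follow the coding convention used later in the description (1100, 1001, 0011, 0101) but are otherwise hypothetical.

```python
# Illustrative sketch of the occlusion-coding sweep (steps 2-4).
# Assumption (not from the patent text): each object's projection onto the
# viewing plane is reduced to the interval [left, right] spanned by its two
# edges along the sweep direction. Real-object edges are written '1' and
# virtual-object edges '0'.

def occlusion_code(real_edges, virtual_edges):
    """Concatenate edge labels in the order the sweeping ray meets them."""
    edges = [(x, '1') for x in real_edges] + [(x, '0') for x in virtual_edges]
    edges.sort(key=lambda e: e[0])              # left-to-right sweep order
    return ''.join(label for _, label in edges)

def in_occlusion(code):
    """Codes 1100 and 0011 mean disjoint projections, i.e. no occlusion."""
    return code not in ('1100', '0011')

# The real object spans [0.0, 2.0] and the virtual object [3.0, 5.0] on the
# viewing plane: the ray meets both real edges first, so the code is '1100'.
assert occlusion_code((0.0, 2.0), (3.0, 5.0)) == '1100'
assert in_occlusion('1001')   # virtual projection nested inside the real one
```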
Preferably, in the described collaborative virtual-real occlusion handling method in a shared augmented reality scene, whether the virtual occludes the real or the real occludes the virtual is further judged in step 5 as follows:
First, an observation point P1 at which the virtual object and the real object are in a non-occlusion relationship is estimated; observation points P1 and P2 are both projected into a chosen horizontal plane, and the centre point Ov of the virtual object is also projected into that plane;
Second, the projection of P1 is connected to the projection of centre point Ov, and the projection of P2 is connected to the projection of centre point Ov, forming two straight lines P1Ov and P2Ov that intersect at the projection of the centre point;
Finally, from the occlusion code at observation point P1 and the angle between P1Ov and P2Ov, it is determined whether, between the virtual object and the real object observed from candidate observation point P2, the virtual occludes the real or the real occludes the virtual.
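Again purely as an illustration, the sketch below computes the angle ∠P1OvP2 on the horizontal plane and applies the left/right rule developed in step 6 of the embodiment; the names and the 2-D coordinate tuples are assumptions, not part of the patent.

```python
import math

# Illustrative sketch of the angle-based judgment of step 5 / claim 2.

def ccw_angle(ov, p1, p2):
    """Counter-clockwise angle from ray Ov->P1 to ray Ov->P2, in [0, 360)."""
    a1 = math.atan2(p1[1] - ov[1], p1[0] - ov[0])
    a2 = math.atan2(p2[1] - ov[1], p2[0] - ov[0])
    return math.degrees(a2 - a1) % 360.0

def judge_from_reference(code_p1, angle_p1_ov_p2):
    """Apply the left/right rule of the description.

    With code 1100 at P1, a point on P1's 'left' (angle in [0, 180)) meets
    '11' before '00', i.e. the real occludes the virtual; on the 'right' the
    virtual occludes the real. Code 0011 is the mirror image.
    """
    on_left = 0.0 <= angle_p1_ov_p2 < 180.0
    if code_p1 == '1100':
        return 'real occludes virtual' if on_left else 'virtual occludes real'
    if code_p1 == '0011':
        return 'virtual occludes real' if on_left else 'real occludes virtual'
    raise ValueError('P1 must be a non-occlusion observation point')
```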
Preferably, in the described collaborative virtual-real occlusion handling method in a shared augmented reality scene, several candidate observation points are initially estimated in said step 2 and the occlusion relationship is judged for each of them, ensuring that from at least one of these candidate observation points the virtual object and the real object are in a non-occlusion relationship; otherwise the candidate observation points are re-estimated until at least one point observes a non-occlusion relationship between the virtual object and the real object.
Preferably, in the described collaborative virtual-real occlusion handling method in a shared augmented reality scene, at least two observation points Cx and Cy are obtained from which the virtual object and the real object are in a non-occlusion relationship.
Preferably, in the described collaborative virtual-real occlusion handling method in a shared augmented reality scene, in step 5 the method of rejecting observation points in the misjudgment region and reducing misjudgment is: the at least two observation points Cx and Cy, from which the virtual object and the real object are in a non-occlusion relationship, each analyse the candidate observation point Cj+1, at which the virtual object and the real object are in an occlusion relationship; if the two analyses agree on whether the virtual occludes the real or the real occludes the virtual, the analysis of candidate observation point Cj+1 is considered correct.
Preferably, in the described collaborative virtual-real occlusion handling method in a shared augmented reality scene, the at least two observation points Cx and Cy, from which the virtual object and the real object are in a non-occlusion relationship, each analyse the candidate observation point Cj+1, at which the virtual object and the real object are in an occlusion relationship; if the two analyses disagree on whether the virtual occludes the real or the real occludes the virtual, one of the non-occlusion observation points is deemed to lie in the misjudgment region; the observation point in the misjudgment region is rejected, and candidate observation point Cj+1 is analysed using the observation point outside the misjudgment region.
Preferably, in the described collaborative virtual-real occlusion handling method in a shared augmented reality scene, the method of rejecting the observation point in the misjudgment region and analysing candidate observation point Cj+1 with the observation point outside it is as follows:
Suppose candidate observation point Cj+1, at which the virtual object and the real object are in an occlusion relationship, is analysed from two candidate observation points Cx and Cy at which they are in a non-occlusion relationship. Then:
when the angular intervals of Cx and Cy are the same but their occlusion codes differ, and ∠CxOvCj+1 and ∠CyOvCj+1 both lie in [0, 180), compare ∠CxOvCj+1 with ∠CyOvCj+1; the point with the smaller angle is the one outside the misjudgment region;
when the angular intervals of Cx and Cy are the same but their occlusion codes differ, and ∠CxOvCj+1 and ∠CyOvCj+1 both lie in [180, 360), compare ∠CxOvCj+1 with ∠CyOvCj+1; the point with the larger angle is the one outside the misjudgment region;
when the angular intervals of Cx and Cy differ but their occlusion codes are the same, compare ∠CxOvCj+1 with ∠CyOvCj+1 and, from the occlusion code, judge the virtual-real occlusion relationship at Cj+1.
Preferably, in the described collaborative virtual-real occlusion handling method in a shared augmented reality scene, said generating the augmented reality scene comprises the following steps:
1) input a real scene containing an artificial marker;
2) from the artificial marker, compute the position of the observation point from which the real scene is observed;
3) guided by the observation-point position and the artificial marker, place the virtual object, thereby generating the augmented reality scene.
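A hedged sketch of how such marker-based scene generation is commonly implemented (not taken from the patent, which names no library): it assumes OpenCV with the aruco contrib module (pre-4.7 API), a previously calibrated camera matrix and distortion coefficients, and an illustrative marker side length.

```python
import cv2
import numpy as np

# Illustrative: recover the observation-point (camera) pose from a planar
# artificial marker lying flat on the horizontal surface; the pose then
# guides where the virtual object is placed. All parameter names and the
# marker dictionary choice are assumptions.

def observation_pose(frame, camera_matrix, dist_coeffs, marker_len=0.05):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if ids is None:
        return None                      # no marker found in this frame
    # 3-D corners of the marker in its own frame (flat on the plane, z = 0).
    half = marker_len / 2.0
    obj = np.array([[-half,  half, 0], [ half,  half, 0],
                    [ half, -half, 0], [-half, -half, 0]], dtype=np.float32)
    img = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, img, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None  # pose guiding virtual-object placement
```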
Preferably, in the described collaborative virtual-real occlusion handling method in a shared augmented reality scene, the real scene contains a real object with a horizontal surface, and the artificial marker is laid flat on that horizontal surface.
The beneficial effect of the present invention is: when the augmented reality scene is described by a large number of video sequences and many of them either show the virtual occluding the real or show no occlusion at all, the invention offers a significant advantage in reducing the redundant computation of occlusion handling and increasing the generation speed of the augmented reality scene.
Description of drawings
Figs. 1-14 illustrate the collaborative virtual-real occlusion handling method in a shared augmented reality scene of the present invention:
Fig. 1 is a schematic diagram of how spatial occlusion arises;
Fig. 2 is a schematic diagram of multiple cameras filming the same scene;
Fig. 3 is a schematic diagram of the spatial occlusion relationships seen from different directions;
Fig. 4 is a schematic diagram of the processing when the virtual occludes the real;
Fig. 5 is a schematic diagram of the processing when there is no virtual-real occlusion;
Fig. 6 is a schematic diagram of the user, the virtual object and the real object projected onto the horizontal plane π;
Fig. 7 is a schematic diagram of the division into occlusion regions;
Fig. 8 is a schematic diagram of the straight lines dividing the occlusion regions;
Fig. 9 is a schematic diagram of the projections of the virtual object and the real object on the imaging plane;
Fig. 10 is a schematic diagram of the region codes;
Fig. 11 is a schematic diagram of the regions after merging;
Fig. 12 is a schematic diagram of the approximation;
Fig. 13 is an illustration of the misjudgment cases;
Fig. 14 is a schematic diagram of the misjudgment-elimination strategy.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings, so that those skilled in the art can implement it by reference to the text of the specification.
The present invention provides a collaborative virtual-real occlusion handling method in a shared augmented reality scene, comprising the following steps:
Step 1: as shown in Fig. 1, for a viewpoint V, if a point A on the line of sight VO is visible and a point B is invisible, then A occludes B. In the augmented reality scene the distances VA and VB are defined as the scene depths of A and B respectively, so the occlusion relationship between any two points of the scene depends on the viewpoint direction, the scene depths, and so on. If VA is less than VB, A occludes B, denoted A<B; conversely, if VA is greater than VB, B occludes A, denoted B<A. Let ⟨A, B, <⟩_VO denote the set of spatial occlusion relationships of A and B under the line of sight VO; it may contain the element A<B or the element B<A, and when A and B do not occlude each other, ⟨A, B, <⟩_VO is the empty set ∅.
When a camera captures a video image of the real environment, if two points A and B of the real environment are mapped by the projective transformation to the same point P of the two-dimensional image plane, with A at distance Z1 from the imaging plane and B at distance Z2, then when Z1 is greater than Z2 the point P in the image is the projection of B, i.e. B occludes A, denoted B<A. Likewise, in the augmented reality scene, if A is a point on the surface of the virtual object and B is a point on the surface of the real object, and after the projective transformation they correspond to the same point P of the two-dimensional image plane, with A at distance Z1 and B at distance Z2 from the imaging plane, then when Z1 is greater than Z2 the point P in the image is the projection of B, i.e. B occludes A, denoted B<A.
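The depth rule above can be stated in a few lines; the sketch below is a direct, illustrative transcription (Z1 and Z2 would come from the virtual rendering pipeline and the real-scene depth respectively; the string results simply name the relation using the notation of the text).

```python
# Illustrative transcription of the depth rule at a shared image point P.

def relation_at_shared_pixel(z_virtual_a, z_real_b):
    """Which of A (virtual) and B (real) is visible at the shared point P."""
    if z_virtual_a > z_real_b:
        return 'B < A'       # B is nearer the imaging plane: B occludes A
    if z_real_b > z_virtual_a:
        return 'A < B'       # A is nearer: A occludes B
    return 'coincident'      # equal depth: no occlusion at P
```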
Step 2: as shown in Fig. 2, input a real scene containing a horizontal plane, with the artificial marker R laid flat on that plane; from the artificial marker, compute the position of the observation point from which the real scene is observed; guided by the observation-point position and the artificial marker, place the virtual object V, thereby generating the augmented reality scene;
Step 3: in the augmented reality scene generated in step 2, estimate several candidate observation points C1, C2, C3 and C4, then place a camera at each; project the virtual object and the real object whose occlusion relationship is to be analysed onto the viewing plane of each candidate observation point, and judge the occlusion relationship at each estimated point, ensuring that from at least one of them the virtual object and the real object are in a non-occlusion relationship; otherwise re-estimate the candidate observation points until at least one point observes a non-occlusion relationship. As shown in Fig. 3-a, in the augmented reality scene based on video sequence C1 there is no spatial occlusion between the virtual object V and the real object R, so ⟨V, R, <⟩_C1 is the empty set ∅. As shown in Fig. 3-b, in the scene based on video sequence C2 there is spatial occlusion between V and R, with R occluding V, so ⟨V, R, <⟩_C2 is R<V. As shown in Fig. 3-c, in the scene based on video sequence C3 there is no spatial occlusion between V and R, so ⟨V, R, <⟩_C3 is the empty set ∅. As shown in Fig. 3-d, in the scene based on video sequence C4 there is spatial occlusion between V and R, with V occluding R, so ⟨V, R, <⟩_C4 is V<R. It follows that ⟨V, R, <⟩_C1 and ⟨V, R, <⟩_C3 are the empty set ∅: there is no spatial occlusion between V and R in video sequences C1 and C3, so the virtual object V can be drawn directly in each; and ⟨V, R, <⟩_C4 is V<R: the virtual object V occludes the real object R, so V can also be drawn directly in video sequence C4;
Step 4: as shown in Fig. 6, represent the virtual object by a cube and the real object by a cylinder, and project both onto the plane π; the camera imaging plane is perpendicular to π, and when the user perceives the real objects of the augmented reality scene through the video sequence captured by a camera, the user's viewpoint direction coincides with the camera direction. To identify the regions with and without occlusion, as shown in Fig. 7, two straight lines divide the plane π into four regions: a 'real occludes virtual' region, through whose video sequences the user perceives the real object occluding the virtual object; a 'virtual occludes real' region, through whose video sequences the user perceives the virtual object occluding the real object; and the no-occlusion regions, through whose video sequences the user perceives no occlusion between the virtual and real objects. If the projected bounding boxes of the virtual and real objects on plane π are circles, the dividing lines L1 and L2 are the internal common tangents of the circles; if the projected bounding boxes on π are convex polygons, the dividing lines L1 and L2 are supporting lines of the convex polygons; as shown in Fig. 8, each supporting line passes through a vertex of each polygon and leaves the two convex polygons on opposite sides of itself. As shown in Fig. 9, for the cameras in the two directions C1 and C2, the projections of the virtual and real objects do not intersect on the imaging plane of C1 but do intersect on the imaging plane of C2; hence there is no occlusion in the augmented reality scene from direction C1, and there is occlusion from direction C2;
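For the circular-bounding-box case, the dividing lines L1 and L2 can be computed in closed form: they pass through the internal homothety centre of the two circles. The sketch below is an illustrative construction under that assumption; the input tuples and function name are hypothetical.

```python
import math

# Illustrative construction of L1 and L2 when both projected bounding boxes
# on plane π are circles: the internal common tangents meet at the internal
# homothety centre O, which divides the centre segment in ratio r_a : r_b.

def internal_common_tangents(c_a, r_a, c_b, r_b):
    """Return (O, theta1, theta2) for two disjoint circles: the tangent
    intersection point O and the directions (radians) of L1 and L2."""
    t = r_a / (r_a + r_b)
    ox = c_a[0] + t * (c_b[0] - c_a[0])
    oy = c_a[1] + t * (c_b[1] - c_a[1])
    d = math.hypot(c_a[0] - ox, c_a[1] - oy)    # distance from O to centre A
    base = math.atan2(c_a[1] - oy, c_a[0] - ox)
    spread = math.asin(min(1.0, r_a / d))       # half-angle of tangency
    return (ox, oy), base + spread, base - spread
```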
Step 5: as shown in Fig. 10, suppose the circle is the projection of the real object on plane π and the rectangle is the projection of the virtual object on π. The imaging plane of direction C1 projects onto π as the straight line S1. Perspective-project the rectangular outline of the virtual object and the circular outline of the real object onto S1 and scan S1 from its left end to its right end (clockwise): record a 1 at each boundary point of the perspective projection of the real object's circular outline, and a 0 at each boundary point of the perspective projection of the virtual object's rectangular outline. On S1 of direction C1 the two boundary points of the real object's circular outline are met first, followed by the two boundary points of the virtual object's rectangular outline, so the virtual-real occlusion relationship of the augmented reality scene based on the video sequence of direction C1 is encoded as 1100; the code of the region containing c1 is likewise 1100, and the codes of the regions containing c2, c3 and c4 are 1001, 0011 and 0101 respectively. From the pattern of the codes it can be seen that the internal common tangents L1 and L2 of the circular bounding boxes of the two projections divide the plane into four regions: the regions without occlusion (the 0011 and 1100 regions), and the other regions, which have occlusion;
Step 6: as shown in Fig. 11, merging reduces the division to three regions: the 0011 region, the 1100 region, and the remaining region, which comprises the 'real occludes virtual' region and the 'virtual occludes real' region. Suppose C1 is the first video sequence. In the augmented reality scene based on C1, the projections of the virtual and real objects on the C1 imaging plane do not intersect and there is no occlusion between them, i.e. ⟨V, R, <⟩_C1 is the empty set ∅; following the occlusion-coding strategy, the occlusion relationship between the virtual and real objects is encoded as 1100. Suppose C2 is the second video sequence. In the scene based on C2 the projections on the C2 imaging plane intersect and occlusion exists, i.e. ⟨V, R, <⟩_C2 is non-empty, but it must be judged whether the real occludes the virtual or the virtual occludes the real. For video sequence C2, video sequence C1, its pose information and its virtual-real occlusion relationship are known. Taking the intersection point O of the two internal common tangents L1 and L2 as the centre, the angle ∠C1OC2 can be computed from the transformation matrices M_C1 of C1 and M_C2 of C2, and ∠C1OC2 ∈ [0, 180); by the definition of 'left' and 'right', C2 is on the left of C1. Because C1's code is 1100, and a point on the left of C1 sees, from left to right on C1's imaging plane, '11' before '00', it can be judged that C2 meets 'real' before 'virtual'; hence in the augmented reality scene based on C2 the real object occludes the virtual object. Let C3 be the third video sequence. In the scene based on C3 the projections on the C3 imaging plane do not intersect and no occlusion exists, i.e. ⟨V, R, <⟩_C3 is the empty set ∅; following the occlusion-coding strategy, the occlusion relationship is encoded as 0011. Let C4 be the fourth video sequence. In the scene based on C4 the projections on the C4 imaging plane intersect and occlusion exists, i.e. ⟨V, R, <⟩_C4 is non-empty. For C4, video sequence C1, its pose information and its occlusion relationship are known; taking the intersection point O of L1 and L2 as the centre, the angle ∠C1OC4 can be computed from M_C1 and M_C4, and ∠C1OC4 ∈ [180, 360), so by the definition of 'left' and 'right' C4 is on the right of C1. Because C1's code is 1100, and a point on the right of C1 sees, from right to left on C1's imaging plane, '00' before '11', it can be judged that C4 meets 'virtual' before 'real'; hence in the scene based on C4 the virtual object occludes the real object. Similarly, if video sequence C3, its pose information and its occlusion relationship (code 0011) are known, it can likewise be judged from M_C3 and M_C4 that in the scene based on C4 the virtual object occludes the real object. In general, suppose the existing video sequences C1, C2, ..., Cj, their scene transformation matrices M_C1, M_C2, ..., M_Cj and their occlusion codes ⟨V, R, <⟩_C1, ⟨V, R, <⟩_C2, ..., ⟨V, R, <⟩_Cj are known; for a new video sequence Cj+1, the collaborative virtual-real occlusion evaluation algorithm can be described as follows:
(1) if the virtual-real occlusion code of the augmented reality scene based on video sequence Cj+1 is 0011 or 1100, it can be judged that there is no virtual-real occlusion and the algorithm terminates; otherwise go to (2);
(2) among the known occlusion codes ⟨V, R, <⟩_C1, ⟨V, R, <⟩_C2, ..., ⟨V, R, <⟩_Cj: if 0011 is present, go to (3); if 1100 is present, go to (4); if neither 0011 nor 1100 is present, the occlusion relationship between the virtual and real objects can only be computed from the video-image depth information, and the algorithm terminates;
(3) since some ⟨V, R, <⟩_Cx is encoded 0011: if Cj+1 is on the left of Cx, then from left to right on Cx's imaging plane '00' precedes '11', so Cj+1 meets 'virtual' before 'real', and the virtual object occludes the real object; if Cj+1 is on the right of Cx, then from right to left on Cx's imaging plane '11' precedes '00', so Cj+1 meets 'real' before 'virtual', and the real object occludes the virtual object; the algorithm terminates;
(4) since some ⟨V, R, <⟩_Cy is encoded 1100: if Cj+1 is on the left of Cy, then from left to right on Cy's imaging plane '11' precedes '00', so Cj+1 meets 'real' before 'virtual', and the real object occludes the virtual object; if Cj+1 is on the right of Cy, then from right to left on Cy's imaging plane '00' precedes '11', so Cj+1 meets 'virtual' before 'real', and the virtual object occludes the real object; the algorithm terminates;
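Collecting steps (1)-(4), the collaborative evaluation algorithm can be sketched as follows; the dictionary interface and the side_of helper (which encapsulates the ∠CxOCj+1 computation described above) are illustrative assumptions, not part of the patent.

```python
# Illustrative sketch of the collaborative evaluation algorithm (1)-(4).
# `known_codes` maps existing sequences C1..Cj to their occlusion codes, and
# `side_of(c)` reports 'left' or 'right' for Cj+1 relative to sequence c.

def evaluate_new_sequence(code_new, known_codes, side_of):
    # (1) A non-occlusion code for Cj+1 needs no collaboration.
    if code_new in ('0011', '1100'):
        return 'no occlusion'
    cx = next((c for c, k in known_codes.items() if k == '0011'), None)
    cy = next((c for c, k in known_codes.items() if k == '1100'), None)
    if cx is not None:
        # (3) Relative to a 0011 sequence: left -> '00' first -> virtual first.
        return ('virtual occludes real' if side_of(cx) == 'left'
                else 'real occludes virtual')
    if cy is not None:
        # (4) Relative to a 1100 sequence: left -> '11' first -> real first.
        return ('real occludes virtual' if side_of(cy) == 'left'
                else 'virtual occludes real')
    # (2) Neither 0011 nor 1100 available: fall back to depth computation.
    return 'compute from video-image depth'
```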
Step 7: as shown in Fig. 12, because the projected shape of the real object on π and the region-dividing lines are hard to obtain from the video sequences of the augmented reality scene, the internal common tangents L1 and L2 and their intersection point O are hard to determine; the projected centre point Ov of the virtual object on the plane π is therefore used as an approximation of O. As shown in Fig. 13-a, if the new video sequence Cj+1 lies in the 'real occludes virtual' region, the angle ∠C0011OvCj+1 between Cj+1 and an existing video sequence C0011 of the 0011 region is less than 180 degrees, and the angle ∠C1100OvCj+1 between Cj+1 and an existing video sequence C1100 of the 1100 region is greater than 180 degrees, so the left/right relation between the video sequences cannot be misjudged and no occlusion misjudgment between the virtual and real objects arises; in particular, the case cannot be misjudged as 'virtual occludes real'. As shown in Fig. 13-b, if Cj+1 lies in the 'virtual occludes real' region, ∠C0011OvCj+1 may exceed 180 degrees, or ∠C1100OvCj+1 may be less than 180 degrees, so the left/right relation between the video sequences may be misjudged and the occlusion relationship may be misjudged as 'real occludes virtual'. Also as shown in Fig. 13-b, when Cj+1 lies in the 'virtual occludes real' region, the straight line through Cj+1 and Ov intersects L1 or L2 and forms the misjudgment region (the darker region in the figure); when C0011 or C1100 lies in this region, 'virtual occludes real' may be misjudged as 'real occludes virtual'. This misjudgment does not, however, affect the drawn result of 'virtual occludes real': a judgment of 'real occludes virtual' is rendered using the depth information of the video image, so the final augmented reality scene still displays the correct 'virtual occludes real' result. Of course, such a misjudgment forfeits the saving in video-image depth computation; but since the probability of misjudgment is usually small and decreases as the number of video sequences grows, the projected centre point Ov of the virtual object on π can be used as the approximation of O;
Step 8: as shown in Fig. 14, compute ∠CxOvCj+1 and ∠CyOvCj+1. If both lie in the same angular interval, [0, 180) or [180, 360), the angular intervals of Cx and Cy are said to be the same; otherwise they are said to differ. If Cx and Cy have the same occlusion code, 0011 or 1100, their occlusion codes are said to be the same; otherwise they differ. When exactly one of Cx and Cy lies in the misjudgment region:
(1) as shown in Fig. 14-a, when the angular intervals of Cx and Cy are the same but their codes differ, and ∠CxOvCj+1 and ∠CyOvCj+1 both lie in [0, 180), compare the two angles; assuming without loss of generality ∠CxOvCj+1 < ∠CyOvCj+1, take Cx, and judge the virtual-real occlusion relationship of video sequence Cj+1 from ∠CxOvCj+1 and the occlusion code of Cx;
(2) as shown in Fig. 14-b, when the angular intervals of Cx and Cy are the same but their codes differ, and ∠CxOvCj+1 and ∠CyOvCj+1 both lie in [180, 360), compare the two angles; assuming without loss of generality ∠CxOvCj+1 > ∠CyOvCj+1, take Cx, and judge the occlusion relationship of Cj+1 from ∠CxOvCj+1 and the occlusion code of Cx;
(3) as shown in Fig. 14-c, when the angular intervals of Cx and Cy differ but their codes are the same and equal to 0011, compare the two angles; assuming without loss of generality ∠CxOvCj+1 < ∠CyOvCj+1, take Cx, and judge the occlusion relationship of Cj+1 from ∠CxOvCj+1 and the occlusion code of Cx;
(4) as shown in Fig. 14-d, when the angular intervals of Cx and Cy differ but their codes are the same and equal to 1100, compare the two angles; assuming without loss of generality ∠CxOvCj+1 > ∠CyOvCj+1, take Cx, and judge the occlusion relationship of Cj+1 from ∠CxOvCj+1 and the occlusion code of Cx;
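Cases (1)-(4) amount to a small selection rule; the sketch below expresses it directly, with angles in degrees and an illustrative flat-argument interface.

```python
# Illustrative sketch of the selection rule of cases (1)-(4): choose which of
# the two reference points Cx, Cy (at most one of which lies in the
# misjudgment region) to trust when judging Cj+1.

def pick_reference(angle_x, code_x, angle_y, code_y):
    same_interval = (angle_x < 180.0) == (angle_y < 180.0)
    if same_interval and code_x != code_y:
        if angle_x < 180.0:                          # case (1): both in [0, 180)
            return 'Cx' if angle_x < angle_y else 'Cy'
        return 'Cx' if angle_x > angle_y else 'Cy'   # case (2): both in [180, 360)
    if not same_interval and code_x == code_y:
        if code_x == '0011':                         # case (3): smaller angle wins
            return 'Cx' if angle_x < angle_y else 'Cy'
        return 'Cx' if angle_x > angle_y else 'Cy'   # case (4): larger angle wins
    return 'Cx'    # remaining combinations: either reference can be used
```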
Step 9: for the misjudgment introduced by using Ov as an approximation of O, judge according to the following rules to reduce the misjudgment probability:
(1) when the new video sequence lies in the 'real occludes virtual' region, no misjudgment can occur;
(2) when the new video sequence lies in the 'virtual occludes real' region, misjudgment may occur;
(3) for a new video sequence Cj+1 in the 'virtual occludes real' region, if there exists a C0011 or C1100 that is not in the misjudgment region, no misjudgment occurs;
(4) the misjudgment that does occur, with small probability, mistakes 'virtual occludes real' for 'real occludes virtual'; the subsequent rendering, which uses the depth information of the video image, then still draws the correct 'virtual occludes real' effect rather than a wrong 'real occludes virtual' effect, but the saving in video-image depth computation is forfeited.
Although embodiments of the present invention are disclosed above, they are not limited to the uses listed in the specification and the embodiments; the invention can be applied to all fields suitable for it, and those skilled in the art can readily realise further modifications. The invention is therefore not limited to the specific details and the illustrations shown and described here, without departing from the general concept defined by the claims and their range of equivalents.

Claims (9)

1. A collaborative virtual-real occlusion handling method in a shared augmented reality scene, characterized by comprising the following steps:
Step 1: generate the augmented reality scene;
Step 2: estimate the position of a candidate observation point; project the virtual object and the real object whose occlusion relationship is to be analysed onto the viewing plane of this candidate observation point; taking the observation point as the origin, take a ray perpendicular to said imaging plane and sweep it horizontally across said viewing plane;
Step 3: produce an occlusion code from the order in which the ray meets the two edges of the virtual object and the two edges of the real object during the sweep;
Step 4: from the differences between occlusion codes, determine whether the virtual object and the real object observed from this candidate observation point are in an occlusion or a non-occlusion relationship;
Step 5: further judge whether the virtual occludes the real or the real occludes the virtual, and reject observation points lying in the misjudgment region so as to reduce misjudgment.
2. The collaborative virtual-real occlusion handling method in a shared augmented reality scene of claim 1, characterized in that whether the virtual occludes the real or the real occludes the virtual is further judged in step 5 as follows:
First, an observation point P1 at which the virtual object and the real object are in a non-occlusion relationship is estimated; observation points P1 and P2 are both projected into a chosen horizontal plane, and the centre point Ov of the virtual object is also projected into that plane;
Second, the projection of P1 is connected to the projection of centre point Ov, and the projection of P2 is connected to the projection of centre point Ov, forming two straight lines P1Ov and P2Ov that intersect at the projection of the centre point;
Finally, from the occlusion code at observation point P1 and the angle between P1Ov and P2Ov, it is determined whether, between the virtual object and the real object observed from candidate observation point P2, the virtual occludes the real or the real occludes the virtual.
3. The collaborative virtual-real occlusion handling method in a shared augmented reality scene of claim 1, characterized in that several candidate observation points are initially estimated in said step 2 and the occlusion relationship is judged for each of them, ensuring that from at least one of these candidate observation points the virtual object and the real object are in a non-occlusion relationship; otherwise the candidate observation points are re-estimated until at least one point observes a non-occlusion relationship between the virtual object and the real object.
4. The collaborative virtual-real occlusion handling method in a shared augmented reality scene of claim 3, characterized in that at least two observation points Cx and Cy are obtained from which the virtual object and the real object are in a non-occlusion relationship.
5. The collaborative virtual-real occlusion handling method in a shared augmented reality scene of claim 4, characterized in that the at least two observation points Cx and Cy, from which the virtual object and the real object are in a non-occlusion relationship, each analyse the candidate observation point Cj+1, at which the virtual object and the real object are in an occlusion relationship; if the two analyses agree on whether the virtual occludes the real or the real occludes the virtual, the analysis of candidate observation point Cj+1 is considered correct.
6. The collaborative virtual-real occlusion handling method in a shared augmented reality scene of claim 4, characterized in that in step 5 the method of rejecting observation points in the misjudgment region and reducing misjudgment is: the at least two observation points Cx and Cy, from which the virtual object and the real object are in a non-occlusion relationship, each analyse the candidate observation point Cj+1, at which the virtual object and the real object are in an occlusion relationship; if the two analyses disagree on whether the virtual occludes the real or the real occludes the virtual, one of the non-occlusion observation points is deemed to lie in the misjudgment region; the observation point in the misjudgment region is rejected, and candidate observation point Cj+1 is analysed using the observation point outside the misjudgment region.
7. The collaborative virtual-real occlusion handling method in a shared augmented reality scene of claim 6, characterized in that the method of rejecting the observation point in the misjudgment region and analysing candidate observation point Cj+1 with the observation point outside it is as follows:
Suppose candidate observation point Cj+1, at which the virtual object and the real object are in an occlusion relationship, is analysed from two candidate observation points Cx and Cy at which they are in a non-occlusion relationship. Then:
when the angular intervals of Cx and Cy are the same but their occlusion codes differ, and ∠CxOvCj+1 and ∠CyOvCj+1 both lie in [0, 180), compare ∠CxOvCj+1 with ∠CyOvCj+1; the point with the smaller angle is the one outside the misjudgment region;
when the angular intervals of Cx and Cy are the same but their occlusion codes differ, and ∠CxOvCj+1 and ∠CyOvCj+1 both lie in [180, 360), compare ∠CxOvCj+1 with ∠CyOvCj+1; the point with the larger angle is the one outside the misjudgment region;
when the angular intervals of Cx and Cy differ but their occlusion codes are the same, compare ∠CxOvCj+1 with ∠CyOvCj+1 and, from the occlusion code, judge the virtual-real occlusion relationship at Cj+1.
8. The collaborative virtual-real occlusion handling method in a shared augmented reality scene of claim 1, characterized in that said generating the augmented reality scene comprises the following steps:
1) input a real scene containing an artificial marker;
2) from the artificial marker, compute the position of the observation point from which the real scene is observed;
3) guided by the observation-point position and the artificial marker, place the virtual object, thereby generating the augmented reality scene.
9. The collaborative virtual-real occlusion handling method in a shared augmented reality scene of claim 8, characterized in that the real scene contains a real object with a horizontal surface, and the artificial marker is laid flat on that horizontal surface.
CN2011102847343A 2011-09-22 2011-09-22 Collaborative virtual and actual sheltering treatment method in shared enhanced real scene Pending CN102509342A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011102847343A CN102509342A (en) 2011-09-22 2011-09-22 Collaborative virtual and actual sheltering treatment method in shared enhanced real scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2011102847343A CN102509342A (en) 2011-09-22 2011-09-22 Collaborative virtual and actual sheltering treatment method in shared enhanced real scene

Publications (1)

Publication Number Publication Date
CN102509342A true CN102509342A (en) 2012-06-20

Family

ID=46221419

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011102847343A Pending CN102509342A (en) 2011-09-22 2011-09-22 Collaborative virtual and actual sheltering treatment method in shared enhanced real scene

Country Status (1)

Country Link
CN (1) CN102509342A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
CN103984490A (en) * 2014-04-24 2014-08-13 北京掌阔移动传媒科技有限公司 Mobile terminal-based object tracking method and mobile terminal-based object tracking device
CN104995665A (en) * 2012-12-21 2015-10-21 Metaio有限公司 Method for representing virtual information in a real environment
CN105493154A (en) * 2013-08-30 2016-04-13 高通股份有限公司 System and method for determining the extent of a plane in an augmented reality environment
CN105513112A (en) * 2014-10-16 2016-04-20 北京畅游天下网络技术有限公司 Image processing method and device
CN105809667A (en) * 2015-01-21 2016-07-27 瞿志行 Shading effect optimization method based on depth camera in augmented reality
US9922446B2 (en) 2012-12-21 2018-03-20 Apple Inc. Method for representing virtual information in a real environment
CN109074681A (en) * 2016-04-18 2018-12-21 索尼公司 Information processing unit, information processing method and program
CN110554770A (en) * 2018-06-01 2019-12-10 苹果公司 Static shelter
CN110825279A (en) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Method, apparatus and computer readable storage medium for inter-plane seamless handover
CN111124112A (en) * 2019-12-10 2020-05-08 北京一数科技有限公司 Interactive display method and device for virtual interface and entity object
CN112074800A (en) * 2018-05-08 2020-12-11 苹果公司 Techniques for switching between immersion levels
CN112753050A (en) * 2018-09-28 2021-05-04 索尼公司 Information processing apparatus, information processing method, and program
US11392636B2 (en) 2013-10-17 2022-07-19 Nant Holdings Ip, Llc Augmented reality position-based service, methods, and systems
US11854153B2 (en) 2011-04-08 2023-12-26 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms
US12118581B2 (en) 2011-11-21 2024-10-15 Nant Holdings Ip, Llc Location-based transaction fraud mitigation methods and systems

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060204077A1 (en) * 2004-11-24 2006-09-14 Ser-Nam Lim System and method for fast illumination-invariant background subtraction using two views
CN101110908A (en) * 2007-07-20 2008-01-23 西安宏源视讯设备有限责任公司 Foreground depth of field position identification device and method for virtual studio system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060204077A1 (en) * 2004-11-24 2006-09-14 Ser-Nam Lim System and method for fast illumination-invariant background subtraction using two views
CN101110908A (en) * 2007-07-20 2008-01-23 西安宏源视讯设备有限责任公司 Foreground depth of field position identification device and method for virtual studio system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xin Jin et al.: "Cooperatively Resolving Occlusion between Real and Virtual in Multiple Video Sequences", 2011 Sixth Annual ChinaGrid Conference (ChinaGrid), 23 August 2011, pages 234-240, XP032063961, DOI: 10.1109/ChinaGrid.2011.49 *

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11869160B2 (en) 2011-04-08 2024-01-09 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms
US11967034B2 (en) 2011-04-08 2024-04-23 Nant Holdings Ip, Llc Augmented reality object management system
US11854153B2 (en) 2011-04-08 2023-12-26 Nant Holdings Ip, Llc Interference based augmented reality hosting platforms
US12118581B2 (en) 2011-11-21 2024-10-15 Nant Holdings Ip, Llc Location-based transaction fraud mitigation methods and systems
US9922446B2 (en) 2012-12-21 2018-03-20 Apple Inc. Method for representing virtual information in a real environment
US10878617B2 (en) 2012-12-21 2020-12-29 Apple Inc. Method for representing virtual information in a real environment
CN104995665A (en) * 2012-12-21 2015-10-21 Metaio有限公司 Method for representing virtual information in a real environment
CN105493154A (en) * 2013-08-30 2016-04-13 高通股份有限公司 System and method for determining the extent of a plane in an augmented reality environment
CN105493154B (en) * 2013-08-30 2019-05-31 高通股份有限公司 System and method for determining the range of the plane in augmented reality environment
CN103489214A (en) * 2013-09-10 2014-01-01 北京邮电大学 Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
US11392636B2 (en) 2013-10-17 2022-07-19 Nant Holdings Ip, Llc Augmented reality position-based service, methods, and systems
US12008719B2 (en) 2013-10-17 2024-06-11 Nant Holdings Ip, Llc Wide area augmented reality location-based services
CN103984490A (en) * 2014-04-24 2014-08-13 北京掌阔移动传媒科技有限公司 Mobile terminal-based object tracking method and mobile terminal-based object tracking device
CN105513112A (en) * 2014-10-16 2016-04-20 北京畅游天下网络技术有限公司 Image processing method and device
CN105513112B (en) * 2014-10-16 2018-11-16 北京畅游天下网络技术有限公司 Image processing method and device
CN105809667B (en) * 2015-01-21 2018-09-07 瞿志行 Shading effect optimization method based on depth camera in augmented reality
CN105809667A (en) * 2015-01-21 2016-07-27 瞿志行 Shading effect optimization method based on depth camera in augmented reality
CN109074681B (en) * 2016-04-18 2023-03-21 索尼公司 Information processing apparatus, information processing method, and program
CN109074681A (en) * 2016-04-18 2018-12-21 索尼公司 Information processing unit, information processing method and program
CN112074800A (en) * 2018-05-08 2020-12-11 苹果公司 Techniques for switching between immersion levels
CN112074800B (en) * 2018-05-08 2024-05-07 苹果公司 Techniques for switching between immersion levels
CN110554770A (en) * 2018-06-01 2019-12-10 苹果公司 Static shelter
CN110825279A (en) * 2018-08-09 2020-02-21 北京微播视界科技有限公司 Method, apparatus and computer readable storage medium for inter-plane seamless handover
CN112753050A (en) * 2018-09-28 2021-05-04 索尼公司 Information processing apparatus, information processing method, and program
CN112753050B (en) * 2018-09-28 2024-09-13 索尼公司 Information processing device, information processing method, and program
CN111124112A (en) * 2019-12-10 2020-05-08 北京一数科技有限公司 Interactive display method and device for virtual interface and entity object

Similar Documents

Publication Publication Date Title
CN102509342A (en) Collaborative virtual and actual sheltering treatment method in shared enhanced real scene
Murray et al. Using real-time stereo vision for mobile robot navigation
Carozza et al. Markerless vision‐based augmented reality for urban planning
JP6131704B2 (en) Detection method for continuous road segment and detection device for continuous road segment
US9066089B2 (en) Stereoscopic image display device and stereoscopic image display method
CN105279789B (en) A kind of three-dimensional rebuilding method based on image sequence
CN102136155A (en) Object elevation vectorization method and system based on three dimensional laser scanning
KR101002785B1 (en) Method and System for Spatial Interaction in Augmented Reality System
CN103489214A (en) Virtual reality occlusion handling method, based on virtual model pretreatment, in augmented reality system
CN104778694A (en) Parameterized and automatic geometric correction method for multi-projector tiled display
CN104463899A (en) Target object detecting and monitoring method and device
CN102768767B (en) Online three-dimensional reconstructing and locating method for rigid body
CN104021538A (en) Object positioning method and device
CN104134235A (en) Real space and virtual space fusion method and real space and virtual space fusion system
Stucker et al. ResDepth: Learned residual stereo reconstruction
Grehl et al. Towards virtualization of underground mines using mobile robots–from 3D scans to virtual mines
CN104123715B (en) Configure the method and system of parallax value
Huang et al. Fast initialization method for monocular slam based on indoor model
CN106303501A (en) Stereo-picture reconstructing method based on image sparse characteristic matching and device
Roh et al. 3-D object recognition using a new invariant relationship by single-view
Skulimowski et al. Refinement of depth from stereo camera ego-motion parameters
Kitayama et al. 3D map construction based on structure from motion using stereo vision
CN113256773A (en) Surface grid scanning and displaying method, system and device
KR100434877B1 (en) Method and apparatus for tracking stereo object using diparity motion vector
Anwer et al. Calculating real world object dimensions from Kinect RGB-D image using dynamic resolution

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120620