
CN101931824B - Image processing apparatus, and image processing method - Google Patents

Image processing apparatus, and image processing method

Info

Publication number
CN101931824B
CN101931824B CN2010101815482A CN201010181548A
Authority
CN
China
Prior art keywords
image
shooting time
view
time difference
demonstration
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN2010101815482A
Other languages
Chinese (zh)
Other versions
CN101931824A (en)
Inventor
桑原立
横山和也
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sony Corp
Original Assignee
Sony Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corp
Publication of CN101931824A
Application granted
Publication of CN101931824B
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/172 Image signals comprising non-image signal components, e.g. headers or format information
    • H04N13/178 Metadata, e.g. disparity information
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/128 Adjusting depth or disparity
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194 Transmission of image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30 Image reproducers
    • H04N13/356 Image reproducers having separate monoscopic and stereoscopic modes
    • H04N13/359 Switching between monoscopic and stereoscopic modes

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Library & Information Science (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention relates to an image processing apparatus, an image processing method, and a program. The image processing apparatus includes a receiving unit that receives communication data including L and R images, an attribute information acquisition unit that acquires attribute information, including a photographing time, from the communication data, and an output control unit that analyzes the images and attribute information and switches between three-dimensional and two-dimensional image display. If L and R images photographed at the same photographing time have been acquired, the control unit performs three-dimensional image display. If not, the control unit determines whether the object imaging position error that would occur in three-dimensional display using L and R images photographed at different photographing times exceeds a preset permissible object imaging position error: it performs three-dimensional display using the L and R images photographed at different photographing times if the error does not exceed the permissible error, and performs two-dimensional display if the error exceeds it.

Description

Image processing apparatus and image processing method
Technical field
The present invention relates to an image processing apparatus, an image processing method, and a program. More specifically, the present invention relates to an image processing apparatus, image processing method, and program that receive, via a network, images captured by a plurality of cameras for three-dimensional (3D) image display, and display the received images on a display unit.
Background art
In recent years, systems for displaying three-dimensional (3D) images have been actively developed and put to use. Representative 3D display systems include the passive stereo system and the active stereo system.
The passive stereo system uses, for example, polarizing filters that pass only light vibrating in a particular direction, so that separate images are produced for and viewed by the viewer's left and right eyes. Before the light output by the image display device reaches the viewer's eyes, it is separated by polarizing filters into left-eye light and right-eye light. Through polarized glasses worn by the viewer, the left-eye image light enters only the left eye, not the right eye, and the right-eye image light enters only the right eye, not the left eye. The system thus delivers the left-eye image to the viewer's left eye and the right-eye image to the right eye, achieving stereoscopic vision.
Meanwhile, the active stereo system, also called the time-division system, separates the left and right images by means of shutter glasses synchronized with the frame switching timing of the image display device. Under this scheme, the display device alternates between the left-eye image and the right-eye image frame by frame, while the shutter glasses worn by the viewer cover the right eye while the left-eye image is shown and cover the left eye while the right-eye image is shown.
Both systems display a 3D image using images captured from a plurality of different viewpoints, for example an image captured by a camera L for the left-eye image and an image captured by a camera R for the right-eye image.
For example, when the images captured by the two cameras L and R are transmitted over a network and received and displayed by a remote image processing device such as a PC (personal computer) or a TV, display data is produced from pairs of images captured simultaneously by cameras L and R, which must be received reliably. Such systems are described in, for example, Japanese Unexamined Patent Application Publication Nos. 2005-94073 and 2006-140618 and Japanese Unexamined Patent Application Publication (Translation of PCT Application) No. 11-504167.
However, data loss and delay are difficult to prevent completely in network communication. For example, the image captured by camera L at time t1 may be received while the image captured by camera R at the same time t1 is not. In that case, the receiving device cannot display a correct 3D image.
For example, if the receiving and displaying device has received the images captured by camera L at times t, t+1, and so on, but has received only the images captured by camera R up to time t-1, the image display stops at the images captured at time t-1.
Alternatively, the device could display the images captured by camera L in correct sequence at times t, t+1, and so on, while continuing to display the image captured by camera R at time t-1. However, displaying the L image (left-eye image) and R image (right-eye image) asynchronously in this way weakens the correct stereoscopic effect of the 3D image.
Summary of the invention
The present invention has been made in view of, for example, the problems described above. It is desirable to provide an image processing apparatus, image processing method, and program for a system in which images for forming a three-dimensional view, for example a left-eye image and a right-eye image captured from a plurality of viewpoints, are transmitted via a network and displayed as a three-dimensional image by the receiving device; the apparatus, method, and program switch between three-dimensional (3D) and two-dimensional (2D) image display according to the acquisition state of the images from the plurality of viewpoints and the content of those images.
An image processing apparatus according to a first embodiment of the invention includes: a receiving unit configured to receive communication data including an L image (left-eye image) and an R image (right-eye image) to be used for three-dimensional image display; an attribute information acquisition unit configured to acquire attribute information, including the shooting time, from the communication data; and an output control unit configured to analyze the images and attribute information included in the communication data and to switch between three-dimensional and two-dimensional image display based on the result of the analysis. If a pair of L and R images captured at the same shooting time has been acquired, the output control unit performs three-dimensional image display. If such a pair has not been acquired, the output control unit determines whether the object imaging position error that would occur in three-dimensional display using L and R images captured at different shooting times exceeds a preset permissible object imaging position error; if it does not, the unit performs three-dimensional display using the L and R images captured at different shooting times, and if it does, the unit stops three-dimensional display and performs two-dimensional display.
In addition, in the image processing apparatus according to the embodiment of the invention, the output control unit may calculate the inter-frame motion vector V of the moving object with the maximum movement speed among the moving objects contained in the L and R images captured at different shooting times, calculate the permissible shooting time difference δT from the motion vector V and the permissible binocular parallax shift amount δWs on the display surface of the three-dimensional image, perform three-dimensional display using the L and R images captured at different shooting times if the shooting time difference between them does not exceed δT, and stop three-dimensional display and perform two-dimensional display if it exceeds δT.
In addition, in the image processing apparatus according to the embodiment of the invention, the output control unit may obtain the movement speed Vs of the object from the motion vector V and calculate the permissible shooting time difference according to the expression δT = δWs/Vs.
In addition, in the image processing apparatus according to the embodiment of the invention, the output control unit may obtain the permissible binocular parallax shift amount δWsx in the x direction and δWsy in the y direction on the display surface of the three-dimensional image, obtain the movement speed Vsx in the x direction and Vsy in the y direction from the predetermined inter-frame motion vector V, and calculate the permissible shooting time difference δT as the smaller of δWsx/Vsx and δWsy/Vsy.
In addition, in the image processing apparatus according to the embodiment of the invention, the output control unit may obtain a preset permissible shooting time difference δT; if the shooting time difference between L and R images captured at different times does not exceed δT, the unit performs three-dimensional display using those images, and if it exceeds δT, the unit stops three-dimensional display and performs two-dimensional display.
In addition, in the image processing apparatus according to the embodiment of the invention, when stopping three-dimensional display and performing two-dimensional display, the output control unit may refer to the priority information included in the attribute information stored in the communication data and select the image with the higher priority for two-dimensional display.
In addition, an image processing method according to a second embodiment of the invention is executed by an image processing apparatus and includes: a receiving step of causing a communication unit to receive communication data including an L image (left-eye image) and an R image (right-eye image) to be used for three-dimensional image display; an attribute information acquisition step of causing an attribute information acquisition unit to acquire attribute information, including the shooting time, from the communication data; and an analysis and switching step of causing an output control unit to analyze the images and attribute information included in the communication data and to switch between three-dimensional and two-dimensional display based on the result of the analysis. If a pair of L and R images captured at the same shooting time has been acquired, the analysis and switching step performs three-dimensional display. If not, the step determines whether the object imaging position error that would occur in three-dimensional display using L and R images captured at different shooting times exceeds the preset permissible object imaging position error; if it does not, three-dimensional display is performed using those images, and if it does, three-dimensional display is stopped and two-dimensional display is performed.
In addition, a program according to a third embodiment of the invention causes an image processing apparatus to execute image processing, the program including the steps of: causing a communication unit to receive communication data including an L image (left-eye image) and an R image (right-eye image) to be used for three-dimensional image display; causing an attribute information acquisition unit to acquire attribute information, including the shooting time, from the communication data; and causing an output control unit to analyze the images and attribute information included in the communication data and to switch between three-dimensional and two-dimensional display based on the result of the analysis. If a pair of L and R images captured at the same shooting time has been acquired, three-dimensional display is performed. If not, it is determined whether the object imaging position error that would occur in three-dimensional display using L and R images captured at different shooting times exceeds the preset permissible object imaging position error; if it does not, three-dimensional display is performed using those images, and if it does, three-dimensional display is stopped and two-dimensional display is performed.
The program according to the embodiment of the invention can be provided, for example, to an image processing apparatus or computer system capable of executing various program codes, through a computer-readable storage medium or communication medium. By providing the program in computer-readable form, processing according to the program is realized on the image processing apparatus or computer system.
Further objects, features, and advantages of the present invention will become clear from the more detailed description based on the embodiments of the invention described below and the accompanying drawings. In this specification, a system refers to a logical aggregate of a plurality of devices, and the constituent devices are not limited to being within the same housing.
According to the structure of the embodiment of the invention, in an apparatus that receives communication data including an L image (left-eye image) and an R image (right-eye image) and displays a three-dimensional (3D) image based on the received data, three-dimensional display is performed if a pair of L and R images captured at the same shooting time has been acquired. If such a pair has not been acquired, it is determined whether the object imaging position error that would occur in three-dimensional display using L and R images captured at different shooting times exceeds the preset permissible object imaging position error. If the error does not exceed the permissible value, three-dimensional display is performed using the L and R images captured at different shooting times; if it does, three-dimensional display is stopped and two-dimensional display is performed. With this structure, images that do not produce an unnatural sense of depth can be provided even when a pair of L and R images captured at different shooting times is used for three-dimensional display.
Description of drawings
Fig. 1 is a diagram illustrating an example configuration of an image processing apparatus according to an embodiment of the invention and an overview of its processing;
Fig. 2 is a diagram illustrating an example structure of the packets transmitted from the cameras to the image processing apparatus in an embodiment of the invention;
Fig. 3 is a diagram illustrating an example configuration of the image processing apparatus according to an embodiment of the invention;
Figs. 4A and 4B are diagrams illustrating examples of three-dimensional (3D) image display processing;
Figs. 5A and 5B are diagrams illustrating examples of three-dimensional (3D) image display processing performed when mutually synchronized L and R images are not obtained;
Fig. 6 is a diagram illustrating how an object appears in three-dimensional (3D) image display;
Fig. 7 is a diagram illustrating a condition to be satisfied in three-dimensional (3D) image display;
Fig. 8 is a diagram illustrating a display example of three-dimensional (3D) display using synchronized images captured at the same shooting time and a display example of three-dimensional (3D) display using images captured at different shooting times;
Fig. 9 is a diagram illustrating the parameters used by the image processing apparatus according to an embodiment of the invention to determine whether to perform 3D or 2D image display;
Fig. 10 is a flowchart illustrating the processing sequence by which the image processing apparatus according to an embodiment of the invention determines whether to perform 3D or 2D image display;
Fig. 11 is a flowchart illustrating the δT calculation included in that processing sequence; and
Fig. 12 is a flowchart illustrating the δWs calculation included in that processing sequence.
Embodiment
The details of the image processing apparatus, image processing method, and program according to embodiments of the present invention are described below with reference to the drawings. The description consists of the following sections: 1. Overview of the structure and processing of the image processing apparatus according to an embodiment of the invention; 2. 3D image display examples and 3D image display conditions; 3. Details of the processing executed by the image processing apparatus according to an embodiment of the invention.
[1. Overview of the structure and processing of the image processing apparatus according to an embodiment of the invention]
Referring first to Fig. 1 and the subsequent drawings, an overview of the configuration and processing of the image processing apparatus according to an embodiment of the invention is described. Fig. 1 shows cameras L101 and R102, which capture images from a plurality of viewpoints to form a three-dimensional (3D) image, a network 103, and an image processing apparatus 120.
Camera L101 captures the left-eye image used to form the three-dimensional (3D) image, and camera R102 captures the right-eye image. The captured images are packetized together with attribute data of the image data and transmitted to the image processing apparatus 120 via the network 103. The image processing apparatus 120 receives the packets transmitted from cameras L101 and R102, obtains the image data, and displays it on a display unit 124.
The image processing apparatus 120 includes a receiving unit 121, a packet analysis unit 122, an output control unit 123, the display unit 124, a control unit 125, and a memory 126. The receiving unit 121 receives the packets transmitted from cameras L101 and R102. The packet analysis unit 122 analyzes the received packets and extracts the image data, attribute information, and so on. The attribute information includes, for example, the shooting time of each captured image frame.
The extracted data is transferred to the output control unit 123, and a 3D display image is presented on the display unit 124 using the images captured by cameras L101 and R102. The image processing apparatus 120 according to the embodiment of the invention switches between three-dimensional (3D) and two-dimensional (2D) image display based on the reception state of the images captured by cameras L101 and R102 and an analysis of the image content. The details of this processing are described below.
The control unit 125 performs overall control of the processing executed by the receiving unit 121, packet analysis unit 122, output control unit 123, and display unit 124. For example, the control unit 125 performs this control according to a program stored in the memory 126.
As mentioned earlier, 3D image display systems include the passive stereo system and the active stereo system. The passive stereo system uses, for example, polarizing filters that pass only light vibrating in a particular direction to produce separate images that are viewed by the viewer's left and right eyes, respectively.
The active stereo system, also called the time-division system, separates the left and right images by using shutter glasses synchronized with the frame switching timing of the image display.
Both the passive and active stereo systems are applicable to the image processing apparatus 120 according to the embodiment of the invention. The image processing apparatus 120 obtains pairs of images captured at the same time from the packets sent by cameras L101 and R102 and performs 3D image display according to one of these systems.
In packet transmission via the network 103, however, packet loss and delay occur with some probability. The image processing apparatus 120 according to the embodiment of the invention performs processing to handle situations such as packet loss and delay. Specifically, the output control unit 123 of the image processing apparatus 120 analyzes the acquisition state and content of the images captured by cameras L101 and R102.
For example, if it is confirmed that a pair of images captured at the same time by cameras L101 and R102 has been received, 3D display is performed using that pair. If one of the pair captured at the same time has not been received, it is determined whether a natural 3D image can be displayed by combining images captured by cameras L101 and R102 at different shooting times.
If a natural 3D image can be displayed, 3D display is performed using L and R images with a time difference. If it is determined that displaying a natural 3D image is difficult, 3D display is stopped and a 2D image is displayed using either the image captured by camera L101 or the image captured by camera R102.
Fig. 2 shows an example structure of the packets used to transmit the images captured by cameras L101 and R102. In Fig. 2, (a) shows an example structure of the packets output from cameras L101 and R102. Cameras L101 and R102 construct packets of the same structure and output them to the network 103 with the image processing apparatus 120 designated as the destination.
As shown in Fig. 2(a), a packet consists of a header containing address information and the like, followed by a payload holding the actual data to be transmitted. The payload contains a plurality of captured image frame data items and attribute information (SEI: supplemental enhancement information) corresponding to each image frame. The image frame data is stored as, for example, MPEG (Moving Picture Experts Group) coded data.
As shown in Fig. 2(b), the attribute information (SEI) includes: image information, containing a group ID (identifier) identifying the camera pair and image type information indicating whether the frame is an image captured by camera L101 or by camera R102 (L image or R image); shooting time information; a permissible transmission delay time; priority information; and so on.
The shooting time information is time information shared by cameras L101 and R102. For example, a common time reference, such as standard time obtained via the Internet, is used by each camera and set as the shooting time of each captured image frame.
The permissible transmission delay time is, for example, time information, preset or set by the user, expressing how long an image from one of the two cameras may be delayed relative to the image captured at the same time by the other camera.
The priority information records which of the L and R images from the two cameras should preferably be used when the image processing apparatus 120 stops 3D display and performs two-dimensional (2D) display.
Attribute information containing the above items is set in association with each captured image frame. Each packet stores a plurality of image frames and attribute information items.
Fig. 2 shows SEI as the attribute information storage area. However, the area for storing the attribute information is not limited to the SEI field shown in Fig. 2 and can be arranged in various ways. For example, the structure may be modified so that fields capable of storing arbitrary data, such as user data fields, are provided in the packet and the attribute information is stored there. The structure may also be modified to use attribute information packets separate from the image data packets.
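To make the packet layout concrete, the following is a minimal sketch, in Python, of how the per-frame attribute information and payload described above could be modeled on the receiving side; the field names (group_id, shooting_time, and so on) are illustrative assumptions, not the actual SEI syntax.

```python
from dataclasses import dataclass
from enum import Enum

class ImageType(Enum):
    L = "left"    # left-eye image
    R = "right"   # right-eye image

@dataclass
class FrameAttributes:
    """Attribute information (SEI) set for each captured image frame.
    Field names are illustrative, not the actual SEI field syntax."""
    group_id: int              # identifier of the L/R camera pair
    image_type: ImageType      # L image or R image
    shooting_time: float       # common time base shared by cameras L and R
    permissible_delay: float   # permissible transmission delay time (seconds)
    priority: int              # which stream to prefer for 2D fallback

@dataclass
class PacketPayload:
    """Payload of one packet: several coded frames plus one
    FrameAttributes item per frame."""
    frames: list[bytes]                 # e.g., MPEG coded frame data
    attributes: list[FrameAttributes]   # one entry per frame
```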
Next, with reference to Fig. 3, the internal structure of the image processing apparatus 120 and the details of its processing are described. The data packets containing the captured data sent from cameras L101 and R102 are received by the receiving unit 121 of the image processing apparatus 120 and input to the packet analysis unit 122.
As shown in Fig. 3, the packet analysis unit 122 includes a decoding unit 201 and an attribute information acquisition unit 202. The decoding unit 201 decodes the image data, which is coded in MPEG format, for example, and outputs the decoded result to a reception information verification unit 211 of the output control unit 123. The attribute information acquisition unit 202 obtains the image type (L image or R image) information, shooting time information, and permissible transmission delay time stored in each packet, that is, the attribute information corresponding to each image frame described earlier with reference to Fig. 2, and outputs these items to the reception information verification unit 211 of the output control unit 123.
The reception information verification unit 211 of the output control unit 123 determines whether to output a 3D image or a 2D image by processing the image information input from the decoding unit 201, the attribute information input from the attribute information acquisition unit 202, and the parameters stored in a memory 213.
According to the determination made by the reception information verification unit 211, a 3D/2D switching control unit 212 displays either a 3D image or a 2D image on the display unit 124.
That is, as described above, if a pair of images captured at the same time by cameras L101 and R102 has been received, a 3D image is displayed. If one of the pair has not been received, and if it is determined that a natural 3D image can be displayed by combining images captured at different shooting times, a 3D image is displayed using L and R images with a shooting time difference. If it is determined that displaying a natural 3D image by combining images captured at different shooting times is difficult, 3D display is stopped and a 2D image is displayed using either the image captured by camera L101 or the image captured by camera R102. For example, the 2D display uses the image selected according to the priority information included in the attribute information described earlier with reference to Fig. 2.
[2. 3D image display examples and 3D image display conditions]
Next, 3D image display examples and 3D image display conditions are described.
First, with reference to Figs. 4A and 4B, examples of the 3D images displayed on the display unit 124 are described. Figs. 4A and 4B show 3D display examples for the following systems: (1) the active stereo system and (2) the passive stereo system.
As shown in Fig. 4A, in (1) the active stereo system, the L image captured by camera L101 and the R image captured by camera R102 are displayed alternately in time sequence, and a viewer wearing liquid crystal shutter glasses synchronized to the left and right eyes views the L images with the left eye and the R images with the right eye.
As shown in Fig. 4B, in (2) the passive stereo system, one frame image composed of alternately arranged L and R images is output to the display unit. The L and R images are polarized. Through polarized glasses worn by the viewer, the L image portions are viewed by the left eye and the R image portions by the right eye.
Thus, the active stereo system (1) alternately outputs L and R images captured simultaneously, while the passive stereo system (2) produces and outputs one frame image from L and R images captured simultaneously.
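As a simple illustration of the passive-system frame construction just described, the sketch below builds one output frame by alternating rows of simultaneously captured L and R images; this is a simplification under assumed row-based interleaving, since actual devices separate the lines optically with polarizing elements.

```python
def interleave_lr_rows(l_rows, r_rows):
    """Compose one passive-stereo frame: even rows come from the
    L image, odd rows from the R image (images given as row lists)."""
    if len(l_rows) != len(r_rows):
        raise ValueError("L and R images must have the same height")
    return [l if i % 2 == 0 else r
            for i, (l, r) in enumerate(zip(l_rows, r_rows))]
```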
In the example shown in Fig. 4 A and Fig. 4 B, L and the R image taken to t03 at shooting time t01 have all been obtained.Utilize these images that obtains, the two shows and watches correct 3D rendering can to use system (1) and (2).
Yet, if the packet loss or the delay of storage one of L and R image have hindered each the L and the right demonstration processing of using as Fig. 4 A and Fig. 4 B shown in of R image of same shooting time shooting of arriving t03 at t01 in some cases.
Followingly object lesson is described with reference to figure 5A and Fig. 5 B.Fig. 5 A and Fig. 5 B are illustrated in and have obtained the L image of taking to t03 at shooting time t01, but do not obtain at the R image of time t0 2 with the t03 shooting, and have only obtained the example of the processing of when the R image of time t0 1 shooting, carrying out.
In this case, up to using L and the right 3D rendering of R image for example taken to show that system (1) and (2) can successfully be carried out 3D rendering and shown at shooting time t01.Yet, do not get access at time t0 2 and the R image of taking afterwards, show thereby hindered normal 3D rendering.
The example of the processing of carrying out in this case comprises: for example, (a) be presented at the image of taking up to the shooting time of t01, stop afterwards showing and wait for that L and the R image taken at shooting time t02 are right up to having obtained; And (b) be presented at the image of taking up to the shooting time of t01, in the R image that shooting time t01 takes, use the L image with the t03 shooting continuing to use afterwards at shooting time t02.The example of back has been shown in Fig. 5 A and Fig. 5 B.
If the processing of (a) above carrying out, then display image is discontinuous.At this moment,, then do not show correct 3D rendering, and export the non-natural image that lacks the correct depth effect in some cases if carry out the processing of (b).
In output control unit 123 according to the image processing equipment 120 of the embodiment of the invention; Receive Information Authentication unit 211 and analyze the information (image and attribute information) that receives, and confirm by camera L101 and R102 the image of different shooting times shootings be combined and situation about showing under whether can show the 3D rendering of nature.If can show the 3D rendering of nature, then utilize L and R image to show 3D rendering with shooting time difference.If confirm to be difficult to show the 3D rendering of nature, then stop the demonstration of 3D rendering, and utilize the image taken by camera L101 and one of the image taken by camera R102 shows the 2D image.
In the above described manner, carry out such processing according to the output control unit 123 of the image processing equipment 120 of the embodiment of the invention: confirm by camera L101 and R102 the image of different shooting times shootings be combined and situation about showing under whether can show the 3D rendering of nature.Before this concrete description of handling, the principle of the depth effect at first showing with reference to 6 pairs of acquisitions of figure 3D rendering is described.
Fig. 6 shows the viewer's left eye 301 and right eye 302, a display surface 310, and an object imaging position 320. The display surface 310 is, for example, the surface on which the 3D images described earlier with reference to Figs. 4A to 5B are shown, such as a TV, a display, or a screen. The object imaging position 320 represents the position of the object as perceived by the viewer.
The display surface 310 shows the same object in each of the L and R images, at a different display position in each image: the L-image object display position 311 and the R-image object display position 312 shown in the figure. The object at the L-image object display position 311 is viewed only by the viewer's left eye 301, and the object at the R-image object display position 312 only by the right eye 302. As a result, the object position perceived by the viewer corresponds to the object imaging position 320 indicated in the figure.
That is, the viewer perceives the object at a distance Do from the viewer's eyes. When the distance between the eyes and the object is denoted Do and the distance between the eyes and the display surface is denoted Ds, the relation between Do and Ds can be expressed by the following expression (expression 1).
Do = (We / (We - Ws)) · Ds    (expression 1)
Here, We denotes the distance between the viewer's left and right eyes, and Ws denotes the distance on the display surface between the display positions of the same object in the L and R images.
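As a numeric illustration of expression 1 under assumed values (an interocular distance of 6.5 cm and a viewing distance of 200 cm):

```python
def perceived_object_distance(we, ws, ds):
    """Expression 1: Do = (We / (We - Ws)) * Ds.
    we: interocular distance We, ws: on-screen binocular parallax Ws,
    ds: eye-to-display distance Ds (all in the same units)."""
    return (we / (we - ws)) * ds

# With Ws = 0 the object appears on the display surface (Do == Ds);
# a nonzero Ws moves the perceived position off the surface.
print(perceived_object_distance(we=6.5, ws=0.0, ds=200.0))  # 200.0
print(perceived_object_distance(we=6.5, ws=1.0, ds=200.0))  # about 236.4
```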
For example, if L and R images captured at the same time have been obtained, the display surface 310 in Fig. 6 displays the object at the L-image object display position 311 and the R-image object display position 312, and the viewer perceives the object at the object imaging position 320.
When the object is not moving, the corresponding display positions on the display surface 310 in Fig. 6 (the L-image object display position 311 and R-image object display position 312) do not move even if, instead of simultaneously captured L and R images, images captured at different shooting times are used, for example an L image captured at shooting time t01 and an R image captured at shooting time t02. In this case, therefore, a natural 3D image can be displayed even by combining the L image captured at shooting time t01 and the R image captured at shooting time t02.
However, if the object is a moving object, a problem arises. When the object is moving and L and R images captured at different shooting times are combined, the distance Ws between the L-image object display position 311 and the R-image object display position 312 changes. The object imaging position 320 then shifts forward or backward, and the correct sense of the object's position is not obtained.
For example, as shown in the figure, if the permissible error of the imaging position is denoted δDo, the permissible shift amount δWs of the distance Ws between the L-image object display position 311 and the R-image object display position 312 on the display surface 310 (that is, the binocular parallax) can be calculated from the permissible imaging position error δDo.
This calculation is described below with reference to Fig. 7. As shown in Fig. 7, the relation between the permissible imaging position error δDo and the permissible binocular parallax shift amount δWs can be expressed by the following expression (expression 2).
δWs = We · Ds · ((1/Do) - (1/(Do - δDo)))    (expression 2)
Furthermore, if δDo is sufficiently small relative to Do, expression 2 above can be expressed by the following expression (expression 3).
δWs = We · Ds · (δDo / Do²)    (expression 3)
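The sketch below evaluates expressions 2 and 3 side by side under assumed values, showing that the approximation tracks the exact form when δDo is small relative to Do; magnitudes are taken because the sign only encodes whether the shift is toward or away from the viewer.

```python
def permissible_parallax_shift_exact(we, ds, do, d_do):
    """Expression 2 (magnitude): |We * Ds * (1/Do - 1/(Do - dDo))|."""
    return abs(we * ds * (1.0 / do - 1.0 / (do - d_do)))

def permissible_parallax_shift_approx(we, ds, do, d_do):
    """Expression 3, valid for dDo << Do: We * Ds * dDo / Do**2."""
    return we * ds * d_do / do ** 2

# Assumed values: We = 6.5, Ds = 200, Do = 236.4, dDo = 5 (same units).
print(permissible_parallax_shift_exact(6.5, 200.0, 236.4, 5.0))   # ~0.119
print(permissible_parallax_shift_approx(6.5, 200.0, 236.4, 5.0))  # ~0.116
```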
As described above, when the object in the image is not moving, the L-image object display position 311 and R-image object display position 312 shown in Fig. 6 do not change even if images captured at different shooting times are used. However, when the object in the image is moving, using images captured at different shooting times changes the L-image object display position 311 and the R-image object display position 312 shown in Fig. 6.
Therefore, when the image contains a moving object, combining L and R images captured at different shooting times, as described with reference to Figs. 5A and 5B, weakens the correct depth effect of the 3D image.
Let Vs denote the movement speed of the object in the image, and let δT denote the shooting time difference between the L and R images. Because of the time interval δT, the object moves a distance of Vs·δT in the image.
Therefore, when the permissible binocular parallax shift amount δWs of the above expression (expression 2 or 3) is set to Vs·δT, if the image contains an object with maximum movement speed Vs, the permissible shooting time difference between the L and R images can be determined as δT.
In the image processing apparatus 120 according to the embodiment of the invention, the reception information verification unit 211 of the output control unit 123 analyzes the L and R images and the attribute information, first determining whether a combination of L and R images captured at the same shooting time has been obtained.
If a combination of L and R images captured at the same shooting time has been obtained, 3D display is based on that combination. If not, the object with the maximum movement speed Vs in the image is detected, and the permissible shooting time difference δT between the L and R images is calculated. If a combination of L and R images whose shooting time difference does not exceed the permissible shooting time difference δT can be output, a 3D image is output using that combination.
If it is determined that no combination of L and R images with a shooting time difference within the permissible shooting time difference δT can be output, processing switches from 3D display to 2D display. In this case, the 2D display uses the image selected according to, for example, the priority information included in the attribute information described earlier with reference to Fig. 2. Alternatively, whichever of the L and R images is not delayed is output as both the right-eye and left-eye image of the 2D display.
With reference to Fig. 8, a concrete example of a moving object in the image is described below. Fig. 8 shows (a) a 3D display example using synchronized L and R images captured at the same shooting time, and (b) a 3D display example using asynchronous L and R images captured at different shooting times.
In each example, a circular object moves along the object trajectory represented by arc 370.
In the synchronized display example (a), using L and R images captured at the same shooting time, the L-image object display position 351 and R-image object display position 352 correspond to the L-image object display position 311 and R-image object display position 312 described earlier with reference to Fig. 6. That is, the positions 351 and 352 represent the display positions of the object in L and R images captured at the same shooting time. The viewer views the objects at these positions with the left and right eyes, respectively, and therefore perceives the correct object position.
The distance between the L-image object display position 351 and the R-image object display position 352, that is, the binocular parallax, is denoted Wsx. According to expression 1 above, the object position is perceived at the object distance Do.
The asynchronous display example (b), using L and R images captured at different shooting times, shows the L-image object display position 361 and R-image object display position 362 for a combination of, for example, the L image captured at shooting time t03 and the R image captured at the earlier shooting time t01.
In this case, as shown in the figure, the distance in the x direction between the L-image object display position 361 and the R-image object display position 362 is Wsx + δWsx. That is, an error δWsx is added to the binocular parallax Wsx that would appear in the combination of originally synchronized L and R images. This added amount is a factor that moves the object imaging position forward or backward relative to the viewer, that is, decreases or increases the eye-to-object distance Do described earlier with reference to Fig. 6.
The image processing apparatus 120 according to the embodiment of the invention has a preset permissible imaging position error δDo and determines whether the displacement of the object imaging position in 3D display based on a combination of L and R images captured at different shooting times stays within δDo. If the displacement does not exceed the permissible imaging position error δDo, 3D display is performed using that pair of L and R images (the pair captured at different shooting times). If the displacement exceeds δDo, 3D display is stopped and processing switches to 2D display.
For example, suppose that up to time t01, 3D display is performed using L and R images captured at the same shooting time, and that afterwards one of the images, for example the R image captured at shooting time t01, continues to be used while the L images captured at shooting times t02 and t03 are used. Suppose further that the object shown in Fig. 8 moves to the right along the object trajectory 370. In this case, only the object in the L image moves to the right; the position of the object in the R image does not move. The viewer then feels as if the object in Fig. 8 is moving closer. Afterwards, if the L and R images captured at, for example, time t05 are obtained, and 3D display is based on the L and R images captured at shooting time t05, the viewer feels as if the object that had been gradually approaching suddenly jumps backward.
To prevent the viewer from perceiving such unnatural object movement, control should ensure that the object imaging position error appearing in 3D display using L and R images captured at different shooting times, described with reference to Fig. 6, does not exceed the permissible object imaging position error δDo.
If the object's direction of movement includes a y component, the L-image object display position 361 and the R-image object display position 362 are also displaced in the y direction. That is, as shown in Fig. 8(b), the distance in the y direction between the L-image object display position 361 and the R-image object display position 362 is Wsy + δWsy, as indicated in the figure.
Displacement in the y direction causes double blurring (ghosting) of the object. Therefore, processing is performed such that a permissible shift amount δWsy is also preset for the y direction, and 3D display is performed only when the displacement in the y direction of the object display positions in the L and R images to be used does not exceed δWsy.
Specifically, for example, a structure is provided that sets or calculates the two permissible values δWsx and δWsy and performs the 3D/2D switching determination using the smaller of the two, as shown in the sketch below. A concrete processing example is described later with reference to flowcharts. Since displacement in the y direction causes object ghosting, the structure may also be modified to stop 3D display and switch to 2D display whenever a y-direction displacement of the object display positions is detected.
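A sketch of this determination, under the assumption that the x and y limits are both available: the permissible shooting time difference is taken from whichever axis tolerates less shift.

```python
def permissible_time_difference(d_wsx, d_wsy, vsx, vsy):
    """dT = min(dWsx/Vsx, dWsy/Vsy): permissible L/R shooting time
    difference given the permissible shift amount per axis and the
    on-screen speed components of the fastest moving object.
    A zero speed component imposes no limit on that axis."""
    limits = []
    if vsx > 0:
        limits.append(d_wsx / vsx)
    if vsy > 0:
        limits.append(d_wsy / vsy)
    return min(limits) if limits else float("inf")

# Example: the y axis is the binding constraint here.
print(permissible_time_difference(d_wsx=0.12, d_wsy=0.06,
                                  vsx=2.0, vsy=3.0))  # 0.02
```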
[3. Details of the processing executed by the image processing apparatus according to an embodiment of the invention]
Next, the details of the processing executed by the image processing apparatus 120 according to the embodiment of the invention are described with reference to Fig. 9 and the subsequent drawings.
As described above, in the image processing apparatus 120 according to the embodiment of the invention, the reception information verification unit 211 of the output control unit 123 analyzes the L and R images and attribute information, first determining whether a combination of L and R images captured at the same shooting time can be obtained. If it can, 3D display is based on the combination of L and R images captured at the same shooting time.
If a combination of L and R images captured at the same shooting time cannot be obtained, the object with the maximum movement speed Vs in the image is detected, and the permissible shooting time difference δT between the L and R images is calculated. If a combination of L and R images whose shooting time difference does not exceed δT can be output, a 3D image is output using that combination.
If it is determined that no such combination can be output, processing switches from 3D display to 2D display. In this case, the 2D display uses the image selected according to, for example, the priority information included in the attribute information described earlier with reference to Fig. 2. Alternatively, whichever of the L and R images is not delayed is output as both the right-eye and left-eye image of the 2D display.
The parameters obtained or calculated by the reception information verification unit 211 are described below with reference to Fig. 9. They are as follows: (1) the distance Do between the eyes and the object (imaging position); (2) the permissible imaging position error δDo; (3) the distance Ds between the eyes and the display surface; (4) the distance We between the eyes; (5) the binocular parallax Ws (the difference between the display positions of the same object in the L and R images on the display surface); (6) the (maximum) inter-frame motion vector V; (7) the permissible binocular parallax shift amount δWs; and (8) the permissible L/R image display timing difference (permissible shooting time difference) δT.
Examples of the processing for calculating or obtaining parameters (1) to (8) above are now described. According to expression 1 explained earlier, the distance Do between the eyes and the object (imaging position) is calculated from the binocular parallax Ws, the distance Ds between the eyes and the display surface, and the distance We between the eyes. The values of (2) the permissible imaging position error δDo, (3) the distance Ds between the eyes and the display surface, and (4) the distance We between the eyes are preset and stored in the memory 213.
The value of (5) the binocular parallax Ws (the difference between the display positions of the same object in the L and R images on the display surface) is calculated from the distance Ds between the eyes and the display surface and an analysis of the received images. The value of (6) the (maximum) inter-frame motion vector V is calculated from an analysis of the received images; it comprises the movement speed Vs and the direction of motion of the object moving at the maximum speed between frames.
The value of (7) the permissible binocular parallax shift amount δWs is calculated using expression 2 or 3 explained earlier, from the distance Ds between the eyes and the display surface, the distance Do between the eyes and the object (imaging position), the permissible imaging position error δDo, and the distance We between the eyes. Alternatively, the structure may be modified so that a preset fixed value of δWs is stored in the memory 213 and the stored value is used.
The value of (8) the permissible L/R image display timing difference (permissible shooting time difference) δT is calculated using the permissible binocular parallax shift amount δWs described above and the (maximum) inter-frame motion vector V obtained by image analysis. That is, this value is calculated according to the expression δT = δWs/Vs, where Vs is the magnitude of the (maximum) inter-frame motion vector V, that is, the movement speed of the object.
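Chaining parameters (1) to (8) together gives the following end-to-end sketch of how the reception information verification unit might derive δT; the function boundary and names are assumptions, since the patent describes the computation only at the level of the expressions.

```python
def derive_permissible_time_difference(we, ds, ws, d_do, vs):
    """Derive dT from the listed parameters:
    (1) Do via expression 1, (7) dWs via expression 3 (dDo << Do),
    (8) dT = dWs / Vs, with vs the magnitude of the maximum
    inter-frame motion vector V (object speed on the display)."""
    do = (we / (we - ws)) * ds            # expression 1
    d_ws = we * ds * d_do / do ** 2       # expression 3
    return d_ws / vs if vs > 0 else float("inf")

print(derive_permissible_time_difference(
    we=6.5, ds=200.0, ws=1.0, d_do=5.0, vs=3.0))  # ~0.039
```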
Display image below with reference to Figure 10 is carried out by output control unit 123 to flowchart text shown in Figure 12 is confirmed sequence.Figure 10 is used to explain the overall flow chart of being confirmed sequence by the display image of output control unit 123 execution.Figure 11 is the flow chart of the details that is used to explain the processing of the S103 of step shown in Figure 10 (promptly calculating the sequence of allowing L and R image Displaying timer residual quantity δ T).Figure 12 is the flow chart of the details that is used to explain the processing of the S203 of step shown in Figure 11 (promptly calculating the sequence of allowing binocular parallax shift amount δ Ws).
With reference to the flow chart shown in Figure 10, at first carry out the display image of being carried out by output control unit 123 is confirmed the overall explanation of sequence.At step S101, the reception Information Authentication unit 211 of output control unit 123 determines whether that the synchronous images that can carry out L image and R image shows.That is, receiving Information Authentication unit 211 determines whether under the situation that does not have packet loss or delay, to obtain and to be presented at L image and the R image that same shooting time is taken.If confirming the synchronous images that can carry out L image and R image shows; Then sequence advances to step S106; To be utilized in the L image and the R image of same time shooting, carry out 3D rendering demonstration processing according to previous active stero or passive stero with reference to figure 4A and Fig. 4 B explanation.
If the determination result of step S101 is No, that is, if the reception of the L image or the R image is delayed and synchronized display of the L image and the R image is therefore difficult, the sequence proceeds to step S102.
In step S102, it is determined whether the allowable L/R display-timing difference (allowable shooting-time difference) δT can be obtained from the attribute information of the acquired L and R images. The allowable L/R display-timing difference (allowable shooting-time difference) δT corresponds to the allowable propagation delay time included in the packet attribute information explained earlier with reference to Figure 2.
If the allowable propagation delay time can be obtained from the attribute information of the received packets, its value is set as the allowable L/R display-timing difference (allowable shooting-time difference) δT.
If, on the other hand, the allowable propagation delay time cannot be obtained from the attribute information of the received packets, the sequence proceeds to step S103 to perform the δT calculation processing.
After δT has been obtained or calculated, the sequence proceeds to step S104. In step S104, the shooting-time difference between the L image and the R image to be used is compared with the obtained or calculated allowable shooting-time difference δT.
The shooting-time difference between the L image and the R image to be used is calculated as the difference between the shooting times included in the packet attribute information explained earlier with reference to Figure 2.
If it is determined in step S104 that the shooting-time difference between the L image and the R image to be used does not exceed the allowable shooting-time difference δT, the determination result of step S104 is Yes. In this case, it is determined that three-dimensional (3D) image display based on the combination of these L and R images will not cause a substantially unnatural sensation. The sequence then proceeds to step S106, where 3D image display processing is performed, according to the active or passive stereo method explained earlier with reference to Figures 5A and 5B, using the L image and the R image whose shooting-time difference does not exceed the allowable shooting-time difference δT.
If, on the other hand, it is determined in step S104 that the shooting-time difference between the L image and the R image to be used exceeds the allowable shooting-time difference δT, the determination result of step S104 is No. In this case, it is determined that three-dimensional (3D) image display based on the combination of these L and R images would cause a substantially unnatural sensation. The sequence then proceeds to step S105, where 3D image display is stopped and the display is switched to 2D image display. Specifically, 2D image display is performed with an image selected according to, for example, the priority information included in the attribute information explained earlier with reference to Figure 2. Alternatively, the non-delayed one of the L image and the R image is output as both the right-eye image and the left-eye image to display a 2D image.
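For readers who prefer pseudocode, the branch structure of Figure 10 can be summarized by the following Python sketch. This is an informal illustration only: the class, field and function names are hypothetical, and compute_delta_t() is a stand-in for the step S103 processing detailed in Figure 11.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class PacketAttributes:
        # Hypothetical container for the Figure 2 attribute fields.
        synchronized: bool                 # True if a same-time L/R pair arrived intact
        allowable_delay: Optional[float]   # allowable propagation delay [s], if present
        l_shooting_time: float             # shooting time of the L image [s]
        r_shooting_time: float             # shooting time of the R image [s]

    def compute_delta_t(attr: PacketAttributes) -> float:
        # Stand-in for step S103 (Figure 11): delta_T = delta_Ws / Vs.
        return 0.03  # placeholder value for this sketch

    def decide_display_mode(attr: PacketAttributes) -> str:
        # S101: a synchronized pair is available -> 3D display (S106).
        if attr.synchronized:
            return "3D"
        # S102: use the packet-supplied allowable delay as delta_T if present,
        # otherwise calculate it (S103).
        delta_t = attr.allowable_delay
        if delta_t is None:
            delta_t = compute_delta_t(attr)
        # S104: compare the actual shooting-time difference with delta_T.
        if abs(attr.l_shooting_time - attr.r_shooting_time) <= delta_t:
            return "3D"   # S106: the asynchronous pair is still acceptable
        return "2D"       # S105: stop 3D display and fall back to 2D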
The details of the processing of step S103 in the flowchart of Figure 10, that is, the sequence for calculating the allowable L/R display-timing difference (allowable shooting-time difference) δT, will now be explained with reference to Figure 11.
In step S201, the maximum motion vector V is calculated from the picture frames of the L or R image. This processing uses consecutive frames of one of the L image and the R image.
The inter-frame motion of the object moving at the highest speed is analyzed based on the states of the consecutive frames, and the maximum motion vector V is calculated. The vector V carries the object's movement speed Vs and direction of motion. The movement speed can be calculated from the movement distance and the time interval between frames. The vector V is expressed as a two-dimensional vector (Vx, Vy), where Vx and Vy denote the object's movement speed in the x direction and in the y direction, respectively.
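The patent does not prescribe a particular motion-estimation algorithm for step S201. As one plausible realization (an assumption, not the patent's method), the following Python sketch uses OpenCV's Farneback dense optical flow between two consecutive grayscale frames and returns the fastest motion vector in pixels per second:

    import cv2
    import numpy as np

    def max_motion_vector(prev_gray, next_gray, frame_interval_s: float):
        # Dense per-pixel displacement between two consecutive frames.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude = np.linalg.norm(flow, axis=2)          # displacement per pixel
        y, x = np.unravel_index(np.argmax(magnitude), magnitude.shape)
        vx, vy = flow[y, x]                               # pixels per frame
        # Convert to speed by dividing by the inter-frame interval (e.g. 1/30 s).
        return vx / frame_interval_s, vy / frame_interval_s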
Then, in step S202, it is determined whether the allowable binocular parallax shift amount δWs can be obtained. The allowable binocular parallax shift amount δWs may be set in advance and stored in the memory 213. In that case, the allowable binocular parallax shift amount δWs is obtained from the memory 213, and the sequence proceeds to step S204.
If, on the other hand, the allowable binocular parallax shift amount δWs is not set as a value stored in the memory 213, the sequence proceeds to step S203 to calculate the allowable binocular parallax shift amount δWs. Thereafter, the sequence proceeds to step S204. The processing for calculating the allowable binocular parallax shift amount δWs in step S203 will be explained later.
In step S204, the allowable L/R display-timing difference (allowable shooting-time difference) δT is calculated according to the following Expression 4:
δT = min(δWsx/Vx, δWsy/Vy)   (Expression 4)
In the above expression, δWsx denotes the x-direction component of the allowable binocular parallax shift amount δWs, and δWsy denotes its y-direction component. In addition, min(a, b) denotes selecting the smaller of a and b.
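As a purely numerical illustration (the figures are assumed, not taken from the patent): with δWsx = 6 pixels, δWsy = 2 pixels, Vx = 120 pixels/s and Vy = 20 pixels/s, Expression 4 gives δT = min(6/120, 2/20) = min(0.05 s, 0.10 s) = 0.05 s; the x direction is the binding constraint here because the object moves much faster horizontally than vertically.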
As explained earlier with reference to Figure 8, δWsx and δWsy, the x-direction and y-direction components of the allowable binocular parallax shift amount δWs, are values that represent the allowable displacement in the x direction and in the y direction, respectively, measured from the display position each object would have in a synchronized pair of L and R images.
At least the y-direction component δWsy of the allowable binocular parallax shift amount δWs is stored in the memory 213 as a predetermined preset value. The x-direction component δWsx of the allowable binocular parallax shift amount δWs may either be stored in the memory 213 or be calculated in step S203.
Subsequently, the details of the processing for calculating the allowable binocular parallax shift amount δWs in step S203 will be explained with reference to the flowchart shown in Figure 12.
In step S301, Ws is first obtained; it represents the difference between the respective display positions, in the L image and the R image, of the object having the maximum motion vector V calculated in step S201 of the flow of Figure 11 described above. The L image and the R image used here are a pair of L and R images captured at different shooting times that are to be applied to 3D image display, that is, the L and R images captured at different shooting times explained earlier with reference to part (b) of Figure 8.
In step S302, the distance Do between the eyes and the object is calculated according to Expression 1 described earlier. That is, the distance Do between the eyes and the object is calculated according to the following expression:
Do = (We/(We − Ws)) · Ds   (Expression 1)
Here, We denotes the distance between the viewer's left eye and right eye, Ws denotes the distance between the display positions of the same object in the L image and the R image on the display surface, and Ds denotes the distance between the eyes and the display surface. The values of We and Ds stored in the memory 213 are used.
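As a purely numerical illustration (the values are assumed, not taken from the patent): with We = 65 mm, Ds = 2000 mm and Ws = 5 mm, Expression 1 gives Do = (65/(65 − 5)) · 2000 ≈ 2167 mm, that is, the object is perceived slightly behind the display surface; as Ws approaches We the denominator shrinks and the perceived object position recedes toward infinity.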
Then, in step S303, the allowable binocular parallax shift amount δWsx in the x direction is calculated. The processing for calculating the allowable binocular parallax shift amount δWsx in the x direction is performed by applying Expression 2 described earlier. That is, the allowable binocular parallax shift amount δWsx in the x direction is calculated by using the following expression:
δWs = We · Ds · ((1/Do) − (1/(Do − δDo)))   (Expression 2)
Here, We denotes the distance between the viewer's left eye and right eye, Ds denotes the distance between the eyes and the display surface, Do denotes the distance between the eyes and the object, and δDo denotes the allowable image-forming position error. The values of We, Ds and δDo are set in advance, and the values stored in the memory 213 are used. In addition, the value calculated in step S302 is used as Do.
The structure may be modified so that the allowable binocular parallax shift amount δWsx in the x direction is calculated by using another expression (Expression 3) instead of the above Expression 2. That is, the allowable binocular parallax shift amount δWsx in the x direction can be calculated by using the following expression:
δWs = We · Ds · (δDo/Do²)   (Expression 3)
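To make the relationship between Expression 2 and its approximation, Expression 3, concrete, here is a small Python sketch (illustrative only; the sample values are assumptions). The two results agree in magnitude to within a few percent; the sign in Expression 2 merely reflects the direction of the depth error δDo:

    def delta_ws_exact(we: float, ds: float, do: float, ddo: float) -> float:
        # Expression 2: dWs = We*Ds*((1/Do) - (1/(Do - dDo))).
        return we * ds * ((1.0 / do) - (1.0 / (do - ddo)))

    def delta_ws_approx(we: float, ds: float, do: float, ddo: float) -> float:
        # Expression 3: dWs = We*Ds*(dDo/Do^2), valid while dDo << Do.
        return we * ds * ddo / (do * do)

    # Assumed values: We = 65 mm, Ds = 2000 mm, Do = 2167 mm, dDo = 50 mm.
    print(delta_ws_exact(65.0, 2000.0, 2167.0, 50.0))   # about -1.42 mm
    print(delta_ws_approx(65.0, 2000.0, 2167.0, 50.0))  # about +1.38 mm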
The allowable binocular parallax shift amount δWsx in the x direction is calculated in the manner described above. Then, in step S204 shown in Figure 11, the smaller of δWsx/Vx and δWsy/Vy is set as the allowable L/R display-timing difference (allowable shooting-time difference) δT.
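Tying steps S201 to S204 together, the following Python sketch (hypothetical names; all lengths are assumed to share one unit and speeds to be positive, in that unit per second) computes δT along the lines of Figures 11 and 12:

    def compute_delta_t_from_analysis(vx: float, vy: float, we: float, ds: float,
                                      ws: float, delta_do: float,
                                      delta_wsy: float) -> float:
        # S302 (Figure 12): Do = (We/(We - Ws)) * Ds          ... Expression 1
        do = (we / (we - ws)) * ds
        # S303: x-direction allowable shift via the approximation ... Expression 3
        delta_wsx = we * ds * delta_do / (do * do)
        # S204 (Figure 11): delta_T = min(dWsx/Vx, dWsy/Vy)   ... Expression 4
        # delta_wsy is the preset y-direction value read from the memory 213.
        return min(delta_wsx / vx, delta_wsy / vy)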
In step S104 of Figure 10, the δT calculated through the above processing, that is, the allowable L/R display-timing difference (allowable shooting-time difference) δT, is compared with the shooting-time difference ΔT between the L image and the R image to be displayed. That is, it is determined whether the following Expression 5 is satisfied:
ΔT (shooting-time difference between the L image and the R image to be displayed) ≤ δT (allowable shooting-time difference)   (Expression 5)
If the above Expression 5 is satisfied, it is determined that no unnatural sensation will be caused to the viewer, and 3D display is performed with the L image and the R image to be displayed (step S106). If the above Expression 5 is not satisfied, it is determined that an unnatural sensation would be caused to the viewer; 3D display using the L image and the R image to be displayed is stopped, and 2D image display is performed with one of the L image and the R image used as the image viewed by both eyes (step S105).
As described above, the image processing apparatus 120 according to the embodiment of the invention is structured such that, if an L image and an R image synchronized in shooting time are difficult to obtain, it determines, based on the motion state of the objects included in the images, whether the displacement of the object image-forming position or the displacement of the binocular parallax occurring in 3D image display using L and R images captured at different shooting times exceeds a preset allowable value, and performs the 3D image display if the displacement does not exceed the allowable value. Therefore, even when an L image and an R image synchronized in shooting time are difficult to obtain, natural 3D image display can be performed.
That is, in the processing according to the embodiment of the present application, 3D image display is performed when a combination of an L image and an R image can be obtained in which the displacement of the object image-forming position and the displacement of the binocular parallax occurring in 3D image display using L and R images captured at different shooting times do not exceed the allowable image-forming position error δDo and the allowable binocular parallax shift amount δWs, respectively.
The allowable binocular parallax shift amount δWs and the allowable L/R display-timing difference (allowable shooting-time difference) δT have the relationship expressed by δWs = Vs · δT, where Vs denotes the speed of the object moving at the highest speed.
Therefore, if the allowable L/R display-timing difference (allowable shooting-time difference) δT is provided in advance, whether to perform 3D image display or to switch to 2D image display can be determined simply by comparing δT with the shooting-time difference between the L and R images to be used.
Even if the allowable L/R display-timing difference (allowable shooting-time difference) δT is not provided in advance, δT can be calculated promptly by the processing of step S103 shown in Figure 10, that is, the processing described with reference to Figures 11 and 12. That is, δT is calculated according to the expression δT = δWs/Vs from the allowable binocular parallax shift amount δWs and the inter-frame motion vector (maximum) V obtained by image analysis, where Vs denotes the magnitude of the inter-frame motion vector (maximum) V, that is, the speed of the object. The flowcharts illustrate a structure in which the x and y components are calculated and evaluated separately. However, the structure may be modified so that δT is calculated according to the above expression without being divided into x and y components, and the determination is made based on the calculated value of δT.
Therefore, even when δT is not given as a preset value, it can still be determined, provided that the allowable image-forming position error δDo is set, whether display is possible with images whose object image-forming position error does not exceed the allowable image-forming position error δDo. Consequently, when 3D image display using L and R images captured at different shooting times is performed based on this determination result, it is guaranteed that the error of the object image-forming position caused by the display processing does not exceed the allowable image-forming position error δDo, and a natural 3D image can be viewed.
A structure has been described in which some of the values used in the calculations explained with reference to the flowcharts of Figures 10 to 12 are obtained from the memory 213, and the other values are obtained from the attribute information included in the packets received from the cameras L101 and R102. The structure may be modified so that all of these values are stored in the packets transmitted from the cameras L101 and R102, or so that the values are obtained from an external server.
In addition, the series of processing explained in the specification can be executed by hardware, by software, or by a combined structure of the two. When the processing is executed by software, a program recording the processing sequence can be installed in a memory of a computer incorporated in dedicated hardware and executed there, or the program can be installed in and executed by a general-purpose computer capable of executing various kinds of processing. For example, the program can be recorded in a recording medium in advance. Besides being installed in a computer from the recording medium, the program can be received via a network such as a LAN (Local Area Network) or the Internet and installed in a recording medium such as an internal hard disk.
The various kinds of processing described in the specification are not only executed in time series according to the description, but may also be executed concurrently or individually as appropriate or according to the processing capability of the equipment executing the processing. In addition, in this specification, a system refers to a logical collective structure of a plurality of pieces of equipment, and is not limited to one in which the constituent pieces of equipment are contained in the same housing.
The present application contains subject matter related to that disclosed in Japanese Priority Patent Application JP 2009-145987 filed in the Japan Patent Office on June 19, 2009, the entire contents of which are hereby incorporated by reference.
The present invention has been described in detail with reference to specific embodiments. However, it is obvious that those skilled in the art can modify or alter the embodiments without departing from the gist of the present invention. That is, the present invention has been disclosed by way of example and should not be construed restrictively. To determine the gist of the present invention, the appended claims should be referred to.

Claims (7)

1. An image processing apparatus comprising:
a receiving unit configured to receive communication data including an L image serving as a left-eye image and an R image serving as a right-eye image, the images being applied to three-dimensional image display;
an attribute information acquiring unit configured to obtain attribute information including shooting times from the communication data; and
an output control unit configured to analyze the images and the attribute information included in the communication data and to perform switching processing between three-dimensional image display and two-dimensional image display based on the result of the analysis,
wherein, if a pair of L and R images captured at the same shooting time has been obtained, the output control unit performs three-dimensional image display, and
wherein, if a pair of L and R images captured at the same shooting time has not been obtained, the output control unit determines whether the object image-forming position error occurring in three-dimensional image display using L and R images captured at different shooting times exceeds a preset allowable object image-forming position error, performs three-dimensional image display using the L and R images captured at different shooting times if the object image-forming position error does not exceed the allowable object image-forming position error, and stops three-dimensional image display and performs two-dimensional image display if the object image-forming position error exceeds the allowable object image-forming position error.
2. The image processing apparatus according to claim 1,
wherein the output control unit calculates the inter-frame motion vector V of the moving object having the maximum movement speed among the moving objects contained in the L and R images captured at different shooting times, calculates an allowable shooting-time difference δT using the motion vector V and an allowable binocular parallax shift amount δWs on the display surface of the three-dimensional image, performs three-dimensional image display using the L and R images captured at different shooting times if the shooting-time difference between the L image and the R image captured at different shooting times does not exceed the allowable shooting-time difference δT, and stops three-dimensional image display and performs two-dimensional image display if the shooting-time difference between the L image and the R image captured at different shooting times exceeds the allowable shooting-time difference δT.
3. The image processing apparatus according to claim 2,
wherein the output control unit obtains the movement speed Vs of the object from the motion vector V, and performs processing for calculating the allowable shooting-time difference δT according to the expression δT = δWs/Vs.
4. The image processing apparatus according to claim 2,
wherein the output control unit obtains the allowable binocular parallax shift amount δWsx in the x direction and the allowable binocular parallax shift amount δWsy in the y direction of the allowable binocular parallax shift amount δWs on the display surface of the three-dimensional image, obtains the movement speed Vsx in the x direction and the movement speed Vsy in the y direction determined in advance from the inter-frame motion vector V, and performs processing for calculating the allowable shooting-time difference δT as the smaller of the values δWsx/Vsx and δWsy/Vsy.
5. The image processing apparatus according to claim 1,
wherein the output control unit obtains a preset allowable shooting-time difference δT, performs three-dimensional image display using the L and R images captured at different shooting times if the shooting-time difference between the L image and the R image captured at different shooting times does not exceed the allowable shooting-time difference δT, and stops three-dimensional image display and performs two-dimensional image display if the shooting-time difference between the L image and the R image captured at different shooting times exceeds the allowable shooting-time difference δT.
6. The image processing apparatus according to claim 1,
wherein, when stopping three-dimensional image display and performing two-dimensional image display, the output control unit further refers to priority information contained in the attribute information stored in the communication data, selects the image having the higher priority, and performs the two-dimensional image display.
7. An image processing method executed by an image processing apparatus, the image processing method comprising:
a receiving step of causing a communication unit to receive communication data including an L image serving as a left-eye image and an R image serving as a right-eye image, the images being applied to three-dimensional image display;
an attribute information obtaining step of causing an attribute information acquiring unit to obtain attribute information including shooting times from the communication data; and
an output controlling step of causing an output control unit to analyze the images and the attribute information included in the communication data and to perform switching processing between three-dimensional image display and two-dimensional image display based on the result of the analysis,
wherein, if a pair of L and R images captured at the same shooting time has been obtained, three-dimensional image display is performed in the output controlling step, and
wherein, if a pair of L and R images captured at the same shooting time has not been obtained, it is determined in the output controlling step whether the object image-forming position error occurring in three-dimensional image display using L and R images captured at different shooting times exceeds a preset allowable object image-forming position error; three-dimensional image display is performed using the L and R images captured at different shooting times if the object image-forming position error does not exceed the allowable object image-forming position error, and three-dimensional image display is stopped and two-dimensional image display is performed if the object image-forming position error exceeds the allowable object image-forming position error.
CN2010101815482A 2009-06-19 2010-05-20 Image processing apparatus, and image processing method Expired - Fee Related CN101931824B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2009-145987 2009-06-19
JP2009145987A JP5299111B2 (en) 2009-06-19 2009-06-19 Image processing apparatus, image processing method, and program

Publications (2)

Publication Number Publication Date
CN101931824A CN101931824A (en) 2010-12-29
CN101931824B true CN101931824B (en) 2012-11-28

Family

ID=42331082

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101815482A Expired - Fee Related CN101931824B (en) 2009-06-19 2010-05-20 Image processing apparatus, and image processing method

Country Status (5)

Country Link
US (1) US8451321B2 (en)
EP (1) EP2265031B1 (en)
JP (1) JP5299111B2 (en)
CN (1) CN101931824B (en)
AT (1) ATE536704T1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2967324B1 (en) * 2010-11-05 2016-11-04 Transvideo METHOD AND DEVICE FOR CONTROLLING THE PHASING BETWEEN STEREOSCOPIC CAMERAS
JP2012129768A (en) * 2010-12-15 2012-07-05 Seiko Epson Corp Document camera, document camera control method, program, and display processing system
US20130120528A1 (en) * 2011-01-09 2013-05-16 Thomson Licensing Video processing apparatus and method for detecting a temporal synchronization mismatch
JP2012175633A (en) * 2011-02-24 2012-09-10 Fujifilm Corp Image display device, method, and program
JP5092033B2 (en) * 2011-03-28 2012-12-05 株式会社東芝 Electronic device, display control method, and display control program
JP5735330B2 (en) * 2011-04-08 2015-06-17 株式会社ソニー・コンピュータエンタテインメント Image processing apparatus and image processing method
US8643699B2 (en) * 2011-04-26 2014-02-04 Mediatek Inc. Method for processing video input by detecting if picture of one view is correctly paired with another picture of another view for specific presentation time and related processing apparatus thereof
JP6058257B2 (en) * 2011-07-06 2017-01-11 アイキューブド研究所株式会社 Image output apparatus, image output method, and program
ITTO20120208A1 (en) * 2012-03-09 2013-09-10 Sisvel Technology Srl METHOD OF GENERATION, TRANSPORT AND RECONSTRUCTION OF A STEREOSCOPIC VIDEO FLOW
FR2993739B1 (en) * 2012-07-19 2015-04-10 Transvideo "STEREOSCOPIC VIEWING METHOD AND DISPLAY DEVICE IMPLEMENTING SUCH A METHOD"
US9008427B2 (en) 2013-09-13 2015-04-14 At&T Intellectual Property I, Lp Method and apparatus for generating quality estimators

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101006733A (en) * 2004-08-18 2007-07-25 夏普株式会社 Image data display apparatus

Family Cites Families (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3332575B2 (en) * 1994-05-23 2002-10-07 三洋電機株式会社 3D video playback device
US5661518A (en) 1994-11-03 1997-08-26 Synthonics Incorporated Methods and apparatus for the creation and transmission of 3-dimensional images
JP3634677B2 (en) 1999-02-19 2005-03-30 キヤノン株式会社 Image interpolation method, image processing method, image display method, image processing apparatus, image display apparatus, and computer program storage medium
JP3992533B2 (en) * 2002-04-25 2007-10-17 シャープ株式会社 Data decoding apparatus for stereoscopic moving images enabling stereoscopic viewing
JP3778893B2 (en) * 2002-11-19 2006-05-24 株式会社ソフィア Game machine
JP2004357156A (en) * 2003-05-30 2004-12-16 Sharp Corp Video reception apparatus and video playback apparatus
JP4238679B2 (en) 2003-09-12 2009-03-18 ソニー株式会社 Video recording / playback device
JP2006140618A (en) 2004-11-10 2006-06-01 Victor Co Of Japan Ltd Three-dimensional video information recording device and program
US7486981B2 (en) * 2004-11-15 2009-02-03 Given Imaging Ltd. System and method for displaying an image stream
JP4160572B2 (en) * 2005-03-31 2008-10-01 株式会社東芝 Image processing apparatus and image processing method
KR100828358B1 (en) * 2005-06-14 2008-05-08 삼성전자주식회사 Method and apparatus for converting display mode of video, and computer readable medium thereof
JP4912224B2 (en) * 2007-06-08 2012-04-11 キヤノン株式会社 Image display system and control method thereof
JP4681595B2 (en) 2007-12-11 2011-05-11 株式会社日立情報システムズ Information tracing system, information tracing method, and information tracing program
JP4657313B2 (en) * 2008-03-05 2011-03-23 富士フイルム株式会社 Stereoscopic image display apparatus and method, and program
US20100194860A1 (en) * 2009-02-03 2010-08-05 Bit Cauldron Corporation Method of stereoscopic 3d image capture using a mobile device, cradle or dongle

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101006733A (en) * 2004-08-18 2007-07-25 夏普株式会社 Image data display apparatus

Also Published As

Publication number Publication date
EP2265031A1 (en) 2010-12-22
US8451321B2 (en) 2013-05-28
JP5299111B2 (en) 2013-09-25
ATE536704T1 (en) 2011-12-15
CN101931824A (en) 2010-12-29
US20100321472A1 (en) 2010-12-23
JP2011004203A (en) 2011-01-06
EP2265031B1 (en) 2011-12-07

Similar Documents

Publication Publication Date Title
CN101931824B (en) Image processing apparatus, and image processing method
JP5732888B2 (en) Display device and display method
EP1836859B1 (en) Automatic conversion from monoscopic video to stereoscopic video
CN102170577B (en) Method and system for processing video images
KR101185870B1 (en) Apparatus and method for processing 3 dimensional picture
JP4251952B2 (en) Stereoscopic image display apparatus and stereoscopic image display method
EP1967016B1 (en) 3d image display method and apparatus
EP2659680B1 (en) Method and apparatus for providing mono-vision in multi-view system
CN102932662B (en) Single-view-to-multi-view stereoscopic video generation method and method for solving depth information graph and generating disparity map
CN104539929A (en) Three-dimensional image coding method and coding device with motion prediction function
KR20120030005A (en) Image processing device and method, and stereoscopic image display device
KR101994322B1 (en) Disparity setting method and corresponding device
WO2014136144A1 (en) Image display device and image display method
WO2008122838A1 (en) Improved image quality in stereoscopic multiview displays
JP4320271B2 (en) 3D image display method
KR101826025B1 (en) System and method for generating 3d image contents that user interaction is possible
WO2013042392A1 (en) Three-dimensional image evaluation device
KR101978790B1 (en) Multi View Display Device And Method Of Driving The Same
JP2012134885A (en) Image processing system and image processing method
JP5700998B2 (en) 3D image display apparatus and control method thereof
WO2024176749A1 (en) Information processing device, stereoscopic video display system, and program
JP2012142800A (en) Image processing device, image processing method, and computer program
Lee et al. Position Prediction for Eye-tracking based 3D Display
KR101582131B1 (en) 3D real image displayable system
KR20050100895A (en) Apparatus and method for switching multiview stereoscopic images and a multiview stereoscopic display system using that

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20121128

Termination date: 20200520