EP3309750B1 - Image processing apparatus and image processing method - Google Patents
- Publication number
- EP3309750B1 (application EP17195945.5A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- geometric data
- cameras
- image processing
- reliability
- processing apparatus
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00—3D [Three Dimensional] image rendering › G06T15/10—Geometric effects › G06T15/20—Perspective computation › G06T15/205—Image-based rendering
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/50—Depth or shape recovery › G06T7/55—Depth or shape recovery from multiple images
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration › G06T7/33—Determination of transform parameters using feature-based methods › G06T7/344—Determination of transform parameters using feature-based methods involving models
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/60—Analysis of geometric attributes
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T7/00—Image analysis › G06T7/70—Determining position or orientation of objects or cameras
Definitions
- the present invention relates to image processing for generating geometric data of an object.
- the Silhouette Volume Intersection has been known as a method for quickly generating (reconstructing) a three-dimensional shape of an object by using a multi-viewpoint image obtained by a plurality of cameras having different viewpoints.
- according to the Silhouette Volume Intersection, a silhouette image representing the contour of an object is used, so that the shape of the object can be obtained quickly and stably.
- on the other hand, in a case where an object is occluded or defective, for example, and cannot be observed completely, a fundamental problem may occur: a three-dimensional shape approximating the real object cannot be generated.
- Japanese Patent No. 4839237 discloses a method for generating a three-dimensional shape of a partially lacking object by complementing the lacking part of the object.
- however, depending on the complementation method, a three-dimensional shape different from the real shape of the object may be generated.
- for example, according to the Silhouette Volume Intersection, in a case where a lower number of viewpoints (valid cameras) capture an object, or where the valid cameras are distributed unevenly, the complemented shape of the object may be excessively inflated.
- the technology disclosed in Japanese Patent No. 4839237 may not generate a three-dimensional shape for a part unless the number of valid cameras for that part matches the number of all cameras.
- US2005/0088515A1 describes methods, systems, and apparatuses for three-dimensional (3D) imaging.
- the methods, systems, and apparatuses provide 3D surface imaging using a camera ring configuration.
- a method for acquiring a three-dimensional (3D) surface image of a 3D object includes the steps of: positioning cameras in a circular array surrounding the 3D object; calibrating the cameras in a coordinate system; acquiring two-dimensional (2D) images with the cameras; extracting silhouettes from the 2D images; and constructing a 3D model of the 3D object based on intersections of the silhouettes.
- HANSUNG KIM ET AL, "A Real-Time 3D Modeling System Using Multiple Stereo Cameras for Free-Viewpoint Video Generation", Image Analysis and Recognition, Lecture Notes in Computer Science, 1 January 2006, describes a real-time 3D modeling system using multiple stereo cameras.
- the method displays a target object from an arbitrary view by using a shape-recovery algorithm which partitions space into an octree and processes it hierarchically to derive a 3D scene description from 2D images.
- according to a first embodiment, in response to a user's operation through a GUI (graphical user interface), an unnecessary part of geometric data (data representing a three-dimensional shape of an object) is deleted based on the number of valid cameras.
- Fig. 1 illustrates a layout example of cameras included in a camera group.
- An example will be described in which ten cameras are placed in a stadium where rugby is played.
- a player as an object 120 exists on a field 130 where a game is played, and ten cameras 101 to 110 are arranged to surround the field 130.
- Each of the cameras in the camera group is set to have a camera orientation, a focal length, and exposure control parameters appropriate for catching, within an angle of view, the whole field 130 or a region of interest within the field 130.
- while Fig. 1 illustrates a stadium, the technology according to this embodiment is applicable to any scene in which cameras are arranged to surround an object in order to generate (hereinafter, reconstruct) geometric data of the object.
- Fig. 2 illustrates an example of a configuration of a multi-viewpoint video system according to this embodiment.
- the multi-viewpoint video system illustrated in Fig. 2 includes an image processing apparatus 200 and a camera group 209.
- the camera group 209 corresponds to the cameras 101 to 110 in Fig. 1 .
- the image processing apparatus 200 includes a CPU 201, a main memory 202, a storage unit 203, an input unit 204, a display unit 205, and an external I/F unit 206 which are connected through a bus 207.
- the CPU 201 is a processing unit configured to generally control the image processing apparatus 200 and is configured to execute programs stored in the storage unit 203, for example, for performing various processes.
- the main memory 202 may temporarily store data and parameters to be used for performing processes and provides a work area to the CPU 201.
- the storage unit 203 may be a large-capacity storage device configured to store programs and data usable for GUI display and may be a nonvolatile memory such as a hard disk or a silicon disk.
- the input unit 204 is a device such as a keyboard, a mouse, an electronic pen, and a touch panel, and is configured to receive an operation input from a user.
- the display unit 205 may be a liquid crystal panel and is configured to display a GUI relating to reconstruction of geometric data.
- the external I/F unit 206 is connected to the camera group 209 over a LAN 208 and is configured to transmit and receive video data and control signal data.
- the bus 207 connects these units and is configured to perform data transfer.
- the camera group 209 is connected to the image processing apparatus 200 over the LAN 208 and is configured to start and stop an image capturing, change a camera setting (such as a shutter speed and an aperture), and transfer captured video data based on a control signal from the image processing apparatus 200.
- the system configuration may include various constituent elements other than the aforementioned units, but such elements are not described here because they are not the focus of this embodiment.
- Fig. 3 illustrates an example of a GUI display screen usable for reconstructing geometric data according to this embodiment.
- Fig. 3 illustrates a basic display screen of the GUI display screen and includes a reconstructed shape display region 310, a valid camera display/setting region 320, and an operation button region 340.
- the reconstructed shape display region 310 includes a pointer 311 and a slider bar 312 and can display geometric data of an object (or data representing a three-dimensional shape of an object) from an arbitrary viewpoint and at an arbitrary magnification.
- to change the display magnification of the geometric data, the slider bar 312 may be dragged.
- the viewpoint for the geometric data may be changed in response to an operation performed on the geometric data through the input unit 204, such as dragging the data with a mouse or pressing an arrow key.
- the pointer 311 may be superimposed on geometric data so that one point on the geometric data can be designated.
- the valid camera display/setting region 320 includes camera icons 321 to 330 schematically indicating cameras and a field general form 331 schematically representing the field 130.
- the number and layout of the camera icons match the number and layout of the actually placed cameras. Fig. 3 illustrates ten camera icons corresponding to Fig. 1 as an example; if the number and layout of the cameras differ, the camera icons differ from those of Fig. 3 accordingly. Referring to Fig. 3, the camera 101 and the camera 102 correspond to the camera icon 321 and the camera icon 322, respectively.
- each camera icon is displayed in one of two states, ON or OFF, whose meaning will be described below. One of the two states may be designated by the user by operating the input unit 204.
- the operation button region 340 includes a multi-viewpoint video data read button 341, a camera parameter setting button 342, a shape reconstruct button 343, a valid camera display button 344, and a valid camera setting button 345.
- when the multi-viewpoint video data read button 341 is pressed, a window is displayed for designating the multi-viewpoint video data to be used for generating geometric data.
- when the camera parameter setting button 342 is pressed, a window is displayed for obtaining camera parameters such as an intrinsic parameter or an extrinsic parameter of a camera.
- a camera parameter here may be set by reading a file storing numerical values or may be set based on a value input by a user on the displayed window.
- the term "intrinsic parameter” refers to a coordinate value at a center of an image or a focal length of a lens in a camera
- the term “extrinsic parameter” refers to a parameter representing a position or an orientation of a camera.
- a GUI display screen as described above may be used by two different methods.
- a first method displays generated geometric data on the reconstructed shape display region 310 and then displays valid cameras corresponding to a specific position of the geometric data on the valid camera display/setting region 320.
- a user may press the valid camera display button 344 to select the first method.
- the term "valid camera” refers to a camera capturing one point (such as one point designated by a user) on geometric data.
- Geometric data may be represented by various forms such as a set of voxels, a point group, and a mesh (polygon).
- one point on geometric data may be represented as a voxel or as three-dimensional coordinate values. The following description handles a voxel as the minimum unit of geometric data.
- when a user designates an arbitrary point on the geometric data by using the pointer 311 in the reconstructed shape display region 310, the corresponding valid cameras are displayed in the ON state on the valid camera display/setting region 320. Thus, a user can interactively check the valid cameras for an arbitrary position on the geometric data.
- a second method designates an arbitrary valid camera to have an ON state on the valid camera display/setting region 320 so that a partial region of the geometric data viewable from the valid camera can be displayed on the reconstructed shape display region 310.
- a user may press the valid camera setting button 345 to select the second method.
- a partial region of geometric data may be displayed alone or may be superimposed on another part of the geometric data for comparison.
- a user can interactively check which camera is valid for reconstructing a shape of a region of interest.
- a user can interactively check a part corresponding to geometric data which can be estimated from a specific camera (visual field).
- a user may also interactively verify how a specific region of interest is estimated from a specific camera (visual field).
- a user can adjust a parameter for generating geometric data by checking through a GUI a relationship between a valid camera and geometric data to be generated by the camera.
- Parameters usable for generating geometric data may include a minimum number (threshold value) of valid cameras.
- the relationship between the Silhouette Volume Intersection, the scheme used here for generating geometric data, and the threshold value will be described below.
- Fig. 4A illustrates the principle of the Volume Intersection Method (VIM), one implementation of the Silhouette Volume Intersection.
- referring to Fig. 4A, C1 and C2 are camera centers, P1 and P2 are the image planes of the cameras, V is a voxel, R1 and R2 are rays from V to C1 and C2, and A1 and A2 are the intersection points (projections of V) of R1 and R2 with the image planes.
- Fig. 4B is a schematic diagram illustrating a silhouette image obtained from two cameras.
- one point A1 within a silhouette in a silhouette image obtained from the base camera C1 is selected, and the point A1 is projected into a three-dimensional space based on the camera parameters and an arbitrary depth value.
- One point projected into the three-dimensional space corresponds to one voxel V.
- next, whether the point A2, obtained by projecting the voxel V onto the image plane P2 of another camera (the reference camera), lies within the silhouette of the silhouette image obtained from the reference camera is determined. As illustrated in Fig. 4B, if the point A2 lies within the silhouette, the voxel V is kept; if it lies outside the silhouette, the voxel V is deleted.
- a series of these processes may be repeated by changing the coordinates of the point A1, the depth value, the base camera, and the reference camera so that a set (Visual Hull) of connected voxels having a convex shape can be formed.
- the principle of shape reconstruction according to VIM has been described up to this point.
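- as a concrete illustration of the VIM procedure above, the following is a minimal Python sketch, not taken from the patent: the camera representation (intrinsic matrix K, rotation R, translation t), the helper names, and the nearest-pixel silhouette lookup are all assumptions made for illustration. A silhouette pixel A1 of the base camera is back-projected to a voxel V at a chosen depth, and V is kept only if its projection A2 falls inside the silhouette of every reference camera.

```python
import numpy as np

def backproject(K, R, t, a1, depth):
    """Back-project pixel a1 = (u, v) of the base camera to a 3D point
    at the given depth along the viewing ray (world coordinates)."""
    x_cam = depth * (np.linalg.inv(K) @ np.array([a1[0], a1[1], 1.0]))
    return R.T @ (x_cam - t)  # camera coordinates -> world coordinates

def project(K, R, t, X):
    """Project a 3D world point X to pixel coordinates (u, v)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def keep_voxel(V, ref_cams):
    """The A2 test: keep the voxel V only if it projects inside the
    silhouette (a binary HxW mask) of every reference camera."""
    for K, R, t, sil in ref_cams:
        u, v = np.round(project(K, R, t, V)).astype(int)
        h, w = sil.shape
        if not (0 <= v < h and 0 <= u < w) or sil[v, u] == 0:
            return False  # A2 falls outside the silhouette: delete V
    return True
```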
- because the Silhouette Volume Intersection is based on silhouette data having a reduced amount of information, geometric data can be generated robustly and quickly, while the accuracy decreases when the number of valid cameras is low. Fig. 5A is a schematic diagram of a result of a shape reconstruction according to the Silhouette Volume Intersection for an object with two valid cameras.
- referring to Fig. 5A, C1 and C2 are camera centers, P1 and P2 are the image planes of the cameras, R1 and R2 are rays passing through the silhouette outline of the object, OB is a section of the real object, and VH is the Visual Hull obtained by projecting the silhouettes on the image planes P1 and P2.
- the VH has a shape elongated in the vertical direction of Fig. 5A and deviates from the real object.
- Fig. 5B is a schematic diagram illustrating a result of a shape reconstruction according to the Silhouette Volume Intersection for the same object with four valid cameras.
- referring to Fig. 5B, C1 to C4 are camera centers, P1 to P4 are the image planes of the cameras, R1 to R4 are rays passing through the silhouette outline of the object, OB is a section of the real object, and VH is the Visual Hull obtained by projecting the silhouettes on the image planes P1 to P4.
- referring to Fig. 5B, as the number of valid cameras increases, the shape of VH approximates OB more closely, that is, the accuracy increases. As the number of valid cameras increases, the reliability of the reconstructed Visual Hull also increases.
- Visual Hulls with the number of valid cameras equal to or higher than a predetermined threshold value are selectively kept from a plurality of reconstructed Visual Hulls so that highly reliable geometric data can be acquired.
- a Visual Hull with the number of valid cameras lower than the predetermined threshold value may be deleted, or an approximation model may be applied thereto.
- the approximation model may be obtained by learning in advance from geometric data of an object or a similar target or may be a relatively simple shape represented by a function.
- the approximation model may be described by a two-dimensional or three-dimensional function.
- adopting a high threshold value may increase the accuracy of the resulting geometric data while deleting many Visual Hulls, for example.
- conversely, adopting a low threshold value can keep many Visual Hulls while less accurate parts may remain.
- because of this trade-off, the threshold value can be changed in accordance with a given scene according to this embodiment.
- the comparison between the number of valid cameras and the threshold value may be performed in units of a single voxel or a plurality of voxels instead of in units of a Visual Hull. For example, the number of valid cameras may be determined for each voxel, and only voxels whose number of valid cameras is lower than the threshold value may be deleted.
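- the per-voxel variant can be sketched as follows; this is a hedged illustration in which the camera tuples and the counting helper are hypothetical, not interfaces defined by the patent.

```python
import numpy as np

def count_valid_cameras(voxel, cameras):
    """Count the cameras whose silhouette contains the projection of the
    voxel, i.e. the cameras that capture this point of the object."""
    n = 0
    for K, R, t, sil in cameras:
        x = K @ (R @ voxel + t)
        u, v = np.round(x[:2] / x[2]).astype(int)
        h, w = sil.shape
        if 0 <= v < h and 0 <= u < w and sil[v, u] > 0:
            n += 1
    return n

def apply_threshold(voxels, cameras, min_valid):
    """Keep only voxels whose valid-camera count reaches the threshold:
    a high min_valid favours accuracy, a low one keeps more voxels."""
    return [v for v in voxels if count_valid_cameras(v, cameras) >= min_valid]
```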
- a user may adjust the threshold value and check a result of generation of geometric data based on the relationship between the number of valid cameras and accuracy as described above.
- the geometric data displayed on the reconstructed shape display region 310 of the GUI display screen is checked with reference to a predetermined threshold value. If there is a part not accurate enough, a user may check the number of valid cameras of a highly accurately generated part on the valid camera display/setting region 320 and may reset the threshold value based on it to generate geometric data again. This can result in improved accuracy of geometric data of the target part.
- the initial value of the threshold value may be set based on the number of cameras, the layout relationship among the cameras, the size of the stadium, and the shape of the stadium, for example. For example, when fewer cameras are placed in the stadium, a lower threshold value may be set so that a Visual Hull may not be deleted easily. When many cameras are placed in the stadium on the other hand, a higher threshold value may be set so that highly accurate geometric data can be generated.
- processing performed in the image processing apparatus 200 according to the first embodiment will be described with reference to the functional block diagram of Fig. 6 and the flowchart of Fig. 7. The image processing apparatus 200 includes an image data obtaining unit 601, a distortion correcting unit 602, a silhouette generating unit 603, and a Visual Hull generating unit 604.
- the image processing apparatus 200 further includes a reliability determining unit 605, a Visual Hull processing unit 606, a camera parameter obtaining unit 607, and a shape reconstruction parameter obtaining unit 608.
- an example according to this embodiment will mainly be described in which the functions corresponding to the blocks illustrated in Fig. 6 are implemented by the CPU 201 in the image processing apparatus 200. However, a part or all of the functions illustrated in Fig. 6 may be executed by dedicated hardware.
- for example, the functions of the distortion correcting unit 602, the silhouette generating unit 603, and the Visual Hull generating unit 604 in Fig. 6 may be implemented by dedicated hardware while the other functions are implemented by the CPU 201. A flow of the processing performed by these components is described below.
- in step S701, the image data obtaining unit 601 obtains multi-viewpoint image data through the external I/F unit 206.
- in step S702, the camera parameter obtaining unit 607 obtains camera parameters such as an intrinsic parameter, an extrinsic parameter, and a distortion parameter of each camera.
- intrinsic parameter refers to coordinate values at an image center or a focal length of a lens in a camera
- extrinsic parameter refers to a parameter indicating a position or an orientation of the camera. While an extrinsic parameter is described here with a position vector of a camera at world coordinates and a rotation matrix, it may be described according to any other scheme.
- a distortion parameter represents a distortion degree of a lens in a camera.
- a camera parameter may be estimated by "structure from motion" based on multi-viewpoint image data or may be calculated by a calibration performed in advance by using a chart.
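- for concreteness, the following sketch shows how the intrinsic and extrinsic parameters combine into the projection used throughout; all numeric values (focal length, image center, camera position) are hypothetical examples, not values from the patent.

```python
import numpy as np

fx = fy = 1200.0              # focal length in pixels (assumed)
cx, cy = 960.0, 540.0         # image center of a 1920x1080 camera (assumed)
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])            # intrinsic parameters

R = np.eye(3)                              # orientation (rotation matrix)
c = np.array([0.0, 0.0, -50.0])            # camera position in world coords
t = -R @ c                                 # translation used for projection

P = K @ np.hstack([R, t.reshape(3, 1)])    # 3x4 projection matrix

X = np.array([0.0, 0.0, 1.7, 1.0])         # homogeneous world point
x = P @ X
u, v = x[:2] / x[2]                        # pixel coordinates of the point
```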
- in step S703, the distortion correcting unit 602 performs a distortion correction on the multi-viewpoint image data based on the distortion parameter of each camera.
- in step S704, the silhouette generating unit 603 generates a silhouette image from the multi-viewpoint image data in which the distortion has been corrected. The term "silhouette image" refers to a binary image having the region with the object represented in white (pixel value = 255) and the region without the object represented in black (pixel value = 0). Silhouette image data are generated by applying an existing scheme such as background separation and object cut-out to the multi-viewpoint image data.
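- a minimal sketch of this silhouette definition using simple background separation follows; the difference threshold tau is an assumption, and a real system would use a more robust scheme.

```python
import numpy as np

def make_silhouette(frame, background, tau=30):
    """Binary silhouette image: 255 (white) where the frame differs from
    an object-free background image by more than tau in any colour
    channel, 0 (black) elsewhere. Inputs are HxWx3 uint8 arrays."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    mask = diff.max(axis=2) > tau
    return np.where(mask, 255, 0).astype(np.uint8)
```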
- in step S705, the shape reconstruction parameter obtaining unit 608 obtains a shape reconstruction parameter.
- the shape reconstruction parameter may be set by a user through the input unit 204 every time or may be prestored and be read out from the storage unit 203.
- in step S706, the Visual Hull generating unit 604 generates a Visual Hull by using the camera parameters and the silhouette images. The Silhouette Volume Intersection may be used for this purpose. In other words, in step S706 the Visual Hull generating unit 604 generates geometric data (data representing a three-dimensional shape of the object).
- in step S707, the reliability determining unit 605 determines a reliability in units of a single voxel or a plurality of voxels, or for each Visual Hull. In other words, the reliability determining unit 605 determines a reliability of the geometric data (data representing a three-dimensional shape of the object) generated by the Visual Hull generating unit 604 in step S706. The reliability may be based on the number of valid cameras. In that case, the reliability determining unit 605 identifies a voxel or a Visual Hull whose number of valid cameras is lower than a predetermined threshold value as a voxel or a Visual Hull with low reliability.
- in step S708, the Visual Hull processing unit 606 performs correction processing on the voxel or Visual Hull identified in step S707.
- the correction processing may include deletion and approximation-model application.
- in other words, based on the reliability, the Visual Hull processing unit 606 corrects the geometric data (data representing a three-dimensional shape of the object) generated by the Visual Hull generating unit 604 in step S706.
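- putting steps S701 to S708 together, the flow of Fig. 7 can be summarized as below; the unit interfaces are hypothetical stand-ins for the components of Fig. 6, not APIs defined by the patent.

```python
def reconstruct(app, threshold):
    """Schematic flow of Fig. 7 (steps S701 to S708)."""
    images = app.image_data_obtaining_unit.obtain()                     # S701
    params = app.camera_parameter_obtaining_unit.obtain()               # S702
    images = app.distortion_correcting_unit.correct(images, params)     # S703
    sils = app.silhouette_generating_unit.generate(images)              # S704
    recon = app.shape_reconstruction_parameter_obtaining_unit.obtain()  # S705
    hulls = app.visual_hull_generating_unit.generate(params, sils, recon)  # S706
    low = app.reliability_determining_unit.identify(hulls, threshold)   # S707
    return app.visual_hull_processing_unit.correct(hulls, low)          # S708
```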
- as described above, based on the reliability, the image processing apparatus 200 according to this embodiment corrects the geometric data (data representing a three-dimensional shape of the object) generated by the Silhouette Volume Intersection.
- This configuration can prevent excessive inflation of a part of an object with a lower number of valid cameras, for example.
- a three-dimensional shape approximate to a real shape of an object can be generated.
- geometric data generated by a series of the aforementioned processes are displayed on the GUI display screen.
- a user can interactively search for an optimum threshold value through the GUI.
- the threshold value may be a fixed value.
- according to a second embodiment, another implementation example of the Silhouette Volume Intersection will be described. The Space Carving Method (SCM) is known as one such implementation. SCM projects an arbitrary voxel V onto the image planes of all cameras, keeps the voxel V if all of the projected points fall within the silhouette corresponding to each camera, and deletes the voxel V if even one point falls outside the silhouette. This process may be performed on all voxels within a certain range so that the set of kept voxels forms a Visual Hull.
- VIM is suitable for parallel processing, while SCM consumes less memory. Thus, whichever of the two suits a given apparatus configuration may be used.
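- a minimal SCM sketch, under the same hypothetical camera representation as the earlier snippets: every voxel of a bounding grid is projected onto all cameras and survives only if every projection lies inside the corresponding silhouette.

```python
import numpy as np
from itertools import product

def inside_silhouette(K, R, t, sil, X):
    """True if the world point X projects inside the binary mask sil."""
    x = K @ (R @ X + t)
    u, v = np.round(x[:2] / x[2]).astype(int)
    h, w = sil.shape
    return 0 <= v < h and 0 <= u < w and sil[v, u] > 0

def space_carving(lo, hi, step, cameras):
    """Test all voxels in the box [lo, hi); the kept voxels form the
    Visual Hull. A voxel is deleted if even one camera misses it."""
    axes = [np.arange(lo[i], hi[i], step) for i in range(3)]
    return [np.array(p) for p in product(*axes)
            if all(inside_silhouette(K, R, t, sil, np.array(p))
                   for K, R, t, sil in cameras)]
```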
- a "valid camera for the voxel of interest” is defined as a "camera capturing an image of the voxel of interest”.
- Fig. 8A is a schematic diagram of a result of generation of geometric data based on the Silhouette Volume Intersection of an object with two valid cameras
- Fig. 8B is a schematic diagram in a case where the number of valid cameras is equal to four and the valid cameras are distributed partially unevenly.
- referring to Figs. 8A and 8B, C1 to C4 are camera centers, R1 to R4 are rays passing through the silhouette outline of the object, OB is a section of the real object, and VH is the Visual Hull. Unlike Figs. 5A and 5B, the image planes are not illustrated for convenience of illustration.
- Fig. 8A illustrates a lower number of valid cameras, as in the first embodiment, while Fig. 8B illustrates the same number of valid cameras as Fig. 5B but with the valid cameras distributed unevenly. In both cases, the Visual Hull deviates from the real shape.
- one or both of the number of valid cameras and the distribution of the valid cameras may be used as an index or indices for determining a reliability for a voxel.
- the maximum angle made by the optical axes of two valid cameras, for example, may be used as a value indicating the distribution of the valid cameras.
- the physical distance between valid cameras, for example, may be used as another value indicating the distribution. Any other value may be used as long as it indicates such a distribution.
- in order to determine the reliability of all Visual Hulls representing an object, an average value of the reliabilities of the voxels belonging to the applicable Visual Hulls may be used. Instead of such an average value, the maximum value, the minimum value, or the median of the reliabilities may be used.
- in order to identify the Visual Hulls of an object, a silhouette of a specific object in an image may be identified, or the Visual Hulls may be clustered in a space.
- the position of an object may be identified by using information regarding a zenith camera, and a set of voxels present within a predetermined distance around the position may be identified as Visual Hulls of the object.
- however, the reliability of all of the Visual Hulls may be determined by any other method; for example, a reliability computed as a weighted average of the number of valid cameras and the distribution of the valid cameras may be adopted.
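- one possible form of such a combined reliability is sketched below; the normalization by pi, the 50/50 weights, and the function names are assumptions for illustration, not values prescribed by the patent.

```python
import numpy as np

def max_axis_angle(axes):
    """Distribution measure: the largest angle (radians) between the
    optical axes of any two valid cameras."""
    best = 0.0
    for i in range(len(axes)):
        for j in range(i + 1, len(axes)):
            c = np.dot(axes[i], axes[j]) / (
                np.linalg.norm(axes[i]) * np.linalg.norm(axes[j]))
            best = max(best, float(np.arccos(np.clip(c, -1.0, 1.0))))
    return best

def voxel_reliability(valid_axes, n_total, w_count=0.5, w_spread=0.5):
    """Weighted average of the normalized valid-camera count and the
    normalized spread of the valid cameras."""
    count_term = len(valid_axes) / n_total
    spread_term = max_axis_angle(valid_axes) / np.pi
    return w_count * count_term + w_spread * spread_term

def hull_reliability(voxel_reliabilities):
    """Average over the voxels of a Visual Hull; the maximum, minimum,
    or median could be substituted as described above."""
    return float(np.mean(voxel_reliabilities))
```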
- according to a fourth embodiment, a method for correcting a shape will be described in detail.
- an example will be described in which the reliability of all Visual Hulls representing a specific object is used to perform the processing. If it is determined that the reliability of the Visual Hulls is lower than a threshold value, three main patterns may be considered: deleting all of the Visual Hulls, deleting a part of the Visual Hulls, or replacing the Visual Hulls with a different approximation model. Because the case where all of the Visual Hulls are deleted is straightforward, the other two patterns will be described with reference to Figs. 9A and 9B. Though an example in which the object is a human is described below, the same processing may be performed for an object that is not a human.
- Fig. 9A is a conceptual diagram in a case where a part of a Visual Hull is deleted.
- Fig. 9A illustrates a top view of a cylinder 901 having a diameter and a height which can contain one person. It is assumed here that the diameter and height of the cylinder are equal to two meters, for example.
- a solid line 902 indicates the common part (or common region) between a Visual Hull and the cylinder 901. This common part may be kept as the shape, thereby deleting an unnecessarily extending part.
- Fig. 9A illustrates a center position 903 of the bottom plane of the cylinder; the center position 903 matches the barycentric position of the figure obtained by projecting the Visual Hull onto the field.
- the cylinder may be placed at a position other than such a barycentric position if there is a higher possibility that an object exists there.
- any other solid than a cylinder may be used such as a sphere or a rectangular parallelepiped.
- as an alternative deletion method without using a solid, voxels may be projected to an image and a photo-consistency may be calculated; if it is equal to or lower than a threshold value, the voxels may be deleted. In this case, a reliability for each voxel may be used to perform the deletion.
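- the cylinder-based deletion of Fig. 9A can be sketched as follows, assuming voxels are (x, y, z) centers with z measured upward from the field; the 2 m diameter and height follow the text, the rest is illustrative.

```python
import numpy as np

def clip_to_cylinder(voxels, diameter=2.0, height=2.0):
    """Keep the common part of a Visual Hull and an upright cylinder
    whose bottom-plane center is the barycenter of the hull's
    footprint (the figure projected onto the field, z = 0)."""
    pts = np.asarray(voxels, dtype=float)
    cx, cy = pts[:, 0].mean(), pts[:, 1].mean()   # footprint barycenter
    r2 = (diameter / 2.0) ** 2
    keep = (((pts[:, 0] - cx) ** 2 + (pts[:, 1] - cy) ** 2 <= r2)
            & (pts[:, 2] >= 0.0) & (pts[:, 2] <= height))
    return pts[keep]
```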
- Fig. 9B is a conceptual diagram in a case where a Visual Hull is replaced by a different approximation model.
- Fig. 9B illustrates a general human figure model 904, a plate 905 called a billboard, and another simple solid 906. If a billboard is adopted, the height of the billboard is approximately equal to the height of a human. If a simple solid is adopted, its size may be determined in the same manner as illustrated in Fig. 9A. The layout location of the solid may be determined in the same manner as in Fig. 9A.
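- the billboard replacement of Fig. 9B can be sketched as below: the low-reliability hull is reduced to a single upright quad whose width and position come from the hull footprint; the 1.8 m human height and the quad orientation are assumptions.

```python
import numpy as np

def billboard_for(voxels, human_height=1.8):
    """Return the four corners of an upright quad replacing the hull."""
    pts = np.asarray(voxels, dtype=float)
    cx, cy = pts[:, 0].mean(), pts[:, 1].mean()
    half_w = max(np.ptp(pts[:, 0]), np.ptp(pts[:, 1])) / 2.0
    return np.array([[cx, cy - half_w, 0.0],
                     [cx, cy + half_w, 0.0],
                     [cx, cy + half_w, human_height],
                     [cx, cy - half_w, human_height]])
```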
- according to the first to fourth embodiments, the correction processing is performed automatically. According to a fifth embodiment, a user may instead use a GUI to correct a Visual Hull with a reliability lower than a threshold value.
- Fig. 10 illustrates an example of a GUI display screen according to a fifth embodiment.
- Fig. 10 illustrates a basic display screen of the GUI display screen and includes a display region 1000, a correction method display region 1001, a slider bar 1004, an OK button 1005, and a Cancel button 1006.
- the correction method display region 1001 includes a radio button 1002 and a correction method 1003.
- a user may use the radio button 1002 to select a scheme for correcting a shape.
- the slider bar 1004 may be dragged to adjust the threshold value for the reliability.
- when the OK button 1005 is pressed, the preset reliability threshold value and correction method are used to execute correction processing on the geometric data in all image frames.
- a GUI for selecting a target frame may be displayed so that correction processing can be executed on geometric data in an image frame selected by a user.
- when the Cancel button 1006 is pressed, the display region 1000 is closed.
- thus, a three-dimensional shape more approximate to the shape of the real object can be generated.
- Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
- the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
- the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
- the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD) TM ), a flash memory device, a memory card, and the like.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Computing Systems (AREA)
- Computer Graphics (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Generation (AREA)
Description
- In one aspect of the invention there is provided an image processing apparatus according to claims 1 to 10. According to another aspect of the invention there is provided an image processing method according to claims 11 and 12.
- According to other aspects of the invention there is provided a computer program or computer-readable storage medium according to claims 13 and 14 respectively.
- Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
- Fig. 1 illustrates a camera layout according to the first embodiment.
- Fig. 2 illustrates a configuration of a multi-viewpoint video system according to the first embodiment.
- Fig. 3 illustrates a GUI according to the first embodiment.
- Figs. 4A and 4B are conceptual diagrams illustrating the Silhouette Volume Intersection.
- Figs. 5A and 5B are conceptual diagrams illustrating a relationship between viewpoints and a Visual Hull.
- Fig. 6 is a block diagram illustrating a configuration of an image processing apparatus according to the first embodiment.
- Fig. 7 is a flowchart illustrating a flow of processing performed by the image processing apparatus according to the first embodiment.
- Figs. 8A and 8B illustrate distributions of valid cameras.
- Figs. 9A and 9B illustrate a method for correcting a shape.
- Fig. 10 illustrates a GUI according to a fifth embodiment.
- Embodiments of the present invention are described above with reference to the drawings. It is not intended that the following embodiments limit the present invention, nor that all combinations of features according to the embodiments are necessary for the present invention. Like numbers refer to like parts throughout.
- According to a first embodiment, in response to a user's operation through a GUI (graphical user interface), an unnecessary part of geometric data (data representing a three-dimensional shape of an object) is deleted based on the number of valid cameras.
-
Fig. 1 illustrates a layout example of cameras included in a camera group. An example will be described in which ten cameras are placed in a stadium where Rugby is played. A player as anobject 120 exists on afield 130 where a game is played, and tencameras 101 to 110 are arranged to surround thefield 130. Each of the cameras in the camera group is set to have a camera orientation, a focal length, and exposure control parameters appropriate for catching, within an angle of view, thewhole field 130 or a region of interest within thefield 130. Illustrating a stadium inFig. 1 , the technology according to this embodiment is applicable to any arbitrary scene in which cameras are arranged to surround an object to generate (hereinafter, reconstruct) geometric data of the object. -
Fig. 2 illustrates an example of a configuration of a multi-viewpoint video system according to this embodiment. The multi-viewpoint video system illustrated inFig. 2 includes animage processing apparatus 200 and acamera group 209. Thecamera group 209 corresponds to thecameras 101 to 110 inFig. 1 . Theimage processing apparatus 200 includes aCPU 201, amain memory 202, astorage unit 203, aninput unit 204, adisplay unit 205, and an external I/F unit 206 which are connected through abus 207. TheCPU 201 is a processing unit configured to generally control theimage processing apparatus 200 and is configured to execute programs stored in thestorage unit 203, for example, for performing various processes. Themain memory 202 may temporarily store data and parameters to be used for performing processes and provides a work area to theCPU 201. Thestorage unit 203 may be a largecapacity storage device configured to store programs and data usable for GUI display and may be a nonvolatile memory such as a hard disk and a silicon disk. Theinput unit 204 is a device such as a keyboard, a mouse, an electronic pen, and a touch panel, and is configured to receive an operation input from a user. Thedisplay unit 205 may be a liquid crystal panel and is configured to display a GUI relating to reconstruction of geometric data. The external I/F unit 206 is connected to thecamera group 209 over aLAN 208 and is configured to transmit and receive video data and control signal data. Thebus 207 connects these units and is configured to perform data transfer. - The
camera group 209 is connected to theimage processing apparatus 200 over theLAN 208 and is configured to start and stop an image capturing, change a camera setting (such as a shutter speed and an aperture), and transfer captured video data based on a control signal from theimage processing apparatus 200. - The system configuration may include various constituent elements other than the aforementioned units, but such constituent elements will not be described because they are not focused in this embodiment.
-
Fig. 3 illustrates an example of a GUI display screen usable for reconstructing geometric data according to this embodiment.Fig. 3 illustrates a basic display screen of the GUI display screen and includes a reconstructedshape display region 310, a valid camera display/setting region 320, and anoperation button region 340. - The reconstructed
shape display region 310 includes apointer 311 and aslider bar 312 and can display geometric data of an object (or data representing a three-dimensional shape of an object) from an arbitrary viewpoint and at an arbitrary magnification. In order to change the display magnification for geometric data, theslider bar 312 may be dragged to move. The viewpoint for geometric data may be changed in response to an operation performed on geometric data through theinput unit 204, such as dragging the data by using a mouse and pressing an arrow key. Thepointer 311 may be superimposed on geometric data so that one point on the geometric data can be designated. - The valid camera display/
setting region 320 includescamera icons 321 to 330 schematically indicating cameras and a fieldgeneral form 331 schematically representing thefield 130. The number and layout of camera icons are matched with the number and layout of actually placed cameras. Illustrating ten cameras corresponding toFig. 1 as an example, the number and layout of camera icons may be different from those ofFig. 3 if the number and layout of cameras are different. Referring toFig. 3 , thecamera 101 and thecamera 102 correspond to thecamera icon 321 and thecamera icon 322, respectively. The camera icons are displayed as having one of two states of ON/OFF states, whose meaning will be described below. One of the two states may be designated by a user by operating theinput unit 204. - The
operation button region 340 includes a multi-viewpoint video data readbutton 341, a cameraparameter setting button 342, a shape reconstructbutton 343, a validcamera display button 344, and a validcamera setting button 345. When the multi-viewpoint video data readbutton 341 is pressed, a window is displayed for designating multi-viewpoint video data to be used for generating geometric data. When the cameraparameter setting button 342 is pressed, a window is displayed for obtaining a camera parameter such as an intrinsic parameter or an extrinsic parameter of a camera. A camera parameter here may be set by reading a file storing numerical values or may be set based on a value input by a user on the displayed window. Here, the term "intrinsic parameter" refers to a coordinate value at a center of an image or a focal length of a lens in a camera, and the term "extrinsic parameter" refers to a parameter representing a position or an orientation of a camera. When the shape reconstructbutton 343 is pressed, a window opens for setting a parameter relating to shape reconstruction, and geometric data are generated by using multi-viewpoint video data as described above. - A GUI display screen as described above may be used by two different methods. A first method displays generated geometric data on the reconstructed
shape display region 310 and then displays valid cameras corresponding to a specific position of the geometric data on the valid camera display/setting region 320. A user may press the validcamera display button 344 to select the first method. Here, the term "valid camera" refers to a camera capturing one point (such as one point designated by a user) on geometric data. Geometric data may be represented by various forms such as a set of voxels, a point group, and a mesh (polygon). One point on geometric data may be represented as a voxel or three-dimensional coordinate values. The following descriptions handle a voxel as a minimum unit of geometric data. When a user designates an arbitrary one point on geometric data by using thepointer 311 in the reconstructedshape display region 310, a part representing valid cameras is displayed as having an ON state on the valid camera display/setting region 320. Thus, a user can interactively check valid cameras at arbitrary positions on geometric data. - A second method designates an arbitrary valid camera to have an ON state on the valid camera display/
setting region 320 so that a partial region of the geometric data viewable from the valid camera can be displayed on the reconstructedshape display region 310. A user may press the validcamera setting button 345 to select the second method. A partial region of geometric data may be displayed alone or may be superimposed on another part of the geometric data for comparison. Thus, a user can interactively check which camera is valid for reconstructing a shape of a region of interest. A user can interactively check a part corresponding to geometric data which can be estimated from a specific camera (visual field). A user may also interactively verify how a specific region of interest is estimated from a specific camera (visual field). - Thus, a user can adjust a parameter for generating geometric data by checking through a GUI a relationship between a valid camera and geometric data to be generated by the camera. Parameters usable for generating geometric data may include a minimum number (threshold value) of valid cameras. A relationship between the Silhouette Volume Intersection being a scheme for generating geometric data and threshold values will be described below.
- Because the fundamental principle of the Silhouette Volume Intersection is disclosed in
Japanese patent No. 4839237 Fig. 4A illustrates a principle of VIM. Referring toFig. 4A , C1 and C2 are camera centers, P1 and P2 are image planes of cameras, V is a voxel, R1 and R2 are rays from V to C1 and C2, A1 and A2 are intersection points (projected Vs) of R1 and R2 and the image planes.Fig. 4B is a schematic diagram illustrating a silhouette image obtained from two cameras. According to VIM, one point A1 within a silhouette in a silhouette image obtained from the base camera C1 is selected, and the point A1 is projected into a three-dimensional space based on the camera parameters and an arbitrary depth value. One point projected into the three-dimensional space corresponds to one voxel V. Next, whether a point A2 obtained by projecting the voxel V to an image plane P2 of another camera (reference camera) positions within the silhouette in the silhouette image obtained from the reference camera is determined. As illustrated inFig. 4B , if the point A2 positions within the silhouette, the voxel V is kept. If it positions outside the silhouette on the other hand, the voxel V is deleted. A series of these processes may be repeated by changing the coordinates of the point A1, the depth value, the base camera, and the reference camera so that a set (Visual Hull) of connected voxels having a convex shape can be formed. The principle of shape reconstruction according to VIM has been described up to this point. - Because the Silhouette Volume Intersection is based on data on a silhouette having a reduced amount of information, geometric data can be generated robustly and quickly while the accuracy decreases when the number of valid cameras is low.
Fig. 5A is a schematic diagram of a result of a shape reconstruction according to the Silhouette Volume Intersection for an object with two valid cameras. Referring toFig. 5A , C1 and C2 are camera centers, P1 and P2 are image planes of the cameras, R1 and R2 are rays passing through a silhouette outline of the object, OB is a section of a real object, and VH is a Visual Hull obtained by projecting the silhouette on the image planes P1 and P2. The VH has a shape elongating in the vertical direction ofFig. 5A and is deviated from the real object. On the other hand,Fig. 5B is a schematic diagram illustrating a result of a shape reconstruction according to the Silhouette Volume Intersection for the same object with four valid cameras. Referring toFig. 5B , C1 to C4 are camera centers, P1 to P4 are image planes of the cameras, R1 to R4 are rays passing through a silhouette outline of the object, OB is a section of a real object, VH is a Visual Hull obtained by projecting the silhouettes on the planes P1 to P4. Referring toFig. 5B , as the number of valid cameras increases, the approximation of the shape of VH to OB increases or the accuracy increases. As the number of valid cameras increases, the reliability of the reconstructed Visual Hull increases. According to this embodiment based on the characteristics, Visual Hulls with the number of valid cameras equal to or higher than a predetermined threshold value are selectively kept from a plurality of reconstructed Visual Hulls so that highly reliable geometric data can be acquired. A Visual Hull with the number of valid cameras lower than the predetermined threshold value may be deleted, or an approximation model may be applied thereto. The approximation model may be obtained by learning in advance from geometric data of an object or a similar target or may be a relatively simple shape represented by a function. The approximation model may be described by a two-dimensional or three-dimensional function. Adopting a high threshold value may increase the accuracy of the resulting geometric data while many Visual Hulls are to be deleted, for example. On the other hand, adopting a low threshold value can keep many Visual Hulls while a less accurate part may occur. Because of existence of such a trade-off relationship, the threshold value can be changed in accordance with a given scene according to this embodiment. The comparison processing between the number of valid cameras and the threshold value may be performed in units of a single voxel or a plurality of voxels instead of in units of a Visual Hull. For example, the number of valid cameras may be determined for each voxel, and a voxel with the number of valid cameras lower than the threshold value may only be deleted. - Returning to the descriptions on operations to be performed on the GUI display screen, a user may adjust the threshold value and check a result of generation of geometric data based on the relationship between the number of valid cameras and accuracy as described above. According to the first method, the geometric data displayed on the reconstructed
shape display region 310 of the GUI display screen is checked with reference to a predetermined threshold value. If there is a part not accurate enough, a user may check the number of valid cameras of a highly accurately generated part on the valid camera display/setting region 320 and may reset the threshold value based on it to generate geometric data again. This can result in improved accuracy of geometric data of the target part. The initial value of the threshold value may be set based on the number of cameras, the layout relationship among the cameras, the size of the stadium, and the shape of the stadium, for example. For example, when fewer cameras are placed in the stadium, a lower threshold value may be set so that a Visual Hull may not be deleted easily. When many cameras are placed in the stadium on the other hand, a higher threshold value may be set so that highly accurate geometric data can be generated. - Processing to be performed in the
image processing apparatus 200 according to the first embodiment will be described with reference to a functional block diagram illustrated inFig. 6 and a flowchart illustrated inFig. 7 . Theimage processing apparatus 200 includes an imagedata obtaining unit 601, adistortion correcting unit 602, asilhouette generating unit 603, a VisualHull generating unit 604. Theimage processing apparatus 200 further includes areliability determining unit 605, a VisualHull processing unit 606, a cameraparameter obtaining unit 607, and a shape reconstructionparameter obtaining unit 608. An example according to this embodiment will mainly be described in which functions corresponding to the blocks illustrated inFig. 6 are implemented by theCPU 201 in theimage processing apparatus 200. However, a part or all of the functions illustrated inFig. 6 may be executed by dedicated hardware. For example, the functions of thedistortion correcting unit 602, thesilhouette generating unit 603, and the VisualHull generating unit 604 inFig. 6 may be implemented by dedicated hardware while the other functions may be implemented by theCPU 201. A flow of processing to be performed by these components will be described below. - In step S701, the image
data obtaining unit 601 obtain multi-viewpoint image data through the external I/F unit 206. - In step S702, the camera
parameter obtaining unit 607 obtains camera parameters such as an intrinsic parameter, an extrinsic parameter, and a distortion parameter of the camera. Here, the term "intrinsic parameter" refers to coordinate values at an image center or a focal length of a lens in a camera, and the term "extrinsic parameter" refers to a parameter indicating a position or an orientation of the camera. While an extrinsic parameter is described here with a position vector of a camera at world coordinates and a rotation matrix, it may be described according to any other scheme. A distortion parameter represents a distortion degree of a lens in a camera. A camera parameter may be estimated by "structure from motion" based on multi-viewpoint image data or may be calculated by a calibration performed in advance by using a chart. - In step S703, the
distortion correcting unit 602 performs a distortion correction on the multi-viewpoint image data based on a distortion parameter of the camera. - In step S704, the
silhouette generating unit 603 generates a silhouette image from the multi-viewpoint image data in which the distortion is corrected. The term "silhouette image" refers to a binary image having a region with an object represented in white (pixel value = 255) and a region without the object represented in black (pixel value = 0). Silhouette image data are generated by performing an existing scheme such as background separation and object cut-out on multi-viewpoint image data. - In step S705, the shape reconstruction
parameter obtaining unit 608 obtains a shape reconstruction parameter. The shape reconstruction parameter may be set by a user through theinput unit 204 every time or may be prestored and be read out from thestorage unit 203. - In step S706, the Visual
Hull generating unit 604 generates a Visual Hull by using the camera parameters and the silhouette image. In order to perform this, the Silhouette Volume Intersection may be used. In other words, the VisualHull generating unit 604 in step S706 generates geometric data (data representing a three-dimensional shape of the object). - In step S707, the
reliability determining unit 605 determines a reliability in units of a single voxel or a plurality of voxels or for each Visual Hull. In other words, thereliability determining unit 605 determines a reliability of the geometric data (data representing a three-dimensional shape of the object) generated by the VisualHull generating unit 604 in step S706. The reliability may be based on the number of valid cameras. When the reliability is based on the number of valid cameras, thereliability determining unit 605 identifies a voxel or a Visual Hull with the number of valid cameras lower than a predetermined threshold value as a voxel or a Visual Hull with low reliability. - In step S708, the Visual
Hull processing unit 606 performs correction processing on the voxel or Visual Hull identified in step S707. The correction processing may include deleting and model application. In other words, based on the reliability, the VisualHull processing unit 606 corrects the geometric data (data representing a three-dimensional shape of the object) generated by the VisualHull generating unit 604 in step S706. - As described above, based on the reliability, the
image processing apparatus 200 according to this embodiment corrects the geometric data (data representing a three-dimensional shape of the object) generated by the Silhouette Volume Intersection. This configuration can prevent excessive inflation of a part of an object with a lower number of valid cameras, for example. Thus, a three-dimensional shape approximate to a real shape of an object can be generated. According to this embodiment, geometric data generated by a series of the aforementioned processes are displayed on the GUI display screen. According to this embodiment, a user can interactively search an optimum threshold value through the GUI. However, the threshold value may be a fixed value. - Another implementation example of the Silhouette Volume Intersection according to a second embodiment will be described. Space Carving Method (SCM) is known as one implementation of the volume intersection method. SCM projects an arbitrary one voxel V to image planes of all cameras and keeps the voxel V if all of the projected points are within a silhouette corresponding to each of the cameras and deletes the voxel V if even one point is off the silhouette. This process may be performed on all voxels within a certain range so that a set of the kept voxels can form a Visual Hull. VIM is suitable for parallel processing while SCM consumes a less space of memory. Thus, one of them suitable for a given apparatus configuration may be used.
- A method for determining a reliability according to this embodiment will be described in detail. Hereinafter, the expression a "valid camera for the voxel of interest" is defined as a "camera capturing an image of the voxel of interest".
Fig. 8A is a schematic diagram of a result of generating geometric data by the Silhouette Volume Intersection for an object with two valid cameras, and Fig. 8B is a schematic diagram of a case where the number of valid cameras is equal to four and the valid cameras are distributed partially unevenly. Referring to Figs. 8A and 8B, C1 to C4 are camera centers, R1 to R4 are rays passing through the silhouette outline of the object, OB is a real section of the object, and VH is a Visual Hull. Unlike Figs. 5A and 5B, the image planes P1 and P2 are not illustrated for convenience of illustration. -
Fig. 8A illustrates a lower number of valid cameras, as in the first embodiment, while Fig. 8B illustrates the same number of valid cameras as in Fig. 5B, though the valid cameras are distributed unevenly. In either case, the Visual Hull deviates from the real shape. - Therefore, one or both of the number of valid cameras and the distribution of the valid cameras may be used as indices for determining a reliability for a voxel. A maximum value of the angle made by the optical axes of two valid cameras, for example, may be used as a value indicating the distribution of valid cameras. A physical distance between valid cameras is another example of such a value. Any other value may be used as long as it can indicate the distribution. In order to determine a reliability of all Visual Hulls representing an object, an average value of the reliabilities of the voxels belonging to the applicable Visual Hulls may be used. Instead of such an average value, a maximum value, a minimum value, or the median of the reliabilities may be used. In order to identify the Visual Hulls of an object, a silhouette of a specific object in an image may be identified, or Visual Hulls may be clustered in a space. Alternatively, the position of an object may be identified by using information from a zenith camera, and a set of voxels present within a predetermined distance around the position may be identified as the Visual Hulls of the object. However, the reliability of all of the Visual Hulls may be determined by any other method. A reliability computed as a weighted average of the number of valid cameras and the valid camera distribution may also be adopted.
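- As an illustration of combining both indices, the following sketch scores a voxel by a weighted average of its normalised valid-camera count and the maximum angle between the optical axes of its valid cameras; the weights and normalisations are assumptions of this sketch. A Visual Hull-level reliability could then be, for example, the mean of its voxels' scores.

```python
import numpy as np

def camera_spread(optical_axes):
    """Maximum pairwise angle (radians) between the optical axes of the
    valid cameras, one possible value indicating their distribution."""
    axes = np.asarray(optical_axes, dtype=float)
    axes /= np.linalg.norm(axes, axis=1, keepdims=True)
    cos = np.clip(axes @ axes.T, -1.0, 1.0)
    return float(np.max(np.arccos(cos)))

def voxel_reliability(n_valid, n_total, optical_axes,
                      w_count=0.5, w_spread=0.5):
    """Weighted average of the valid-camera count (normalised by the total
    number of cameras) and the camera spread (normalised by pi)."""
    spread = camera_spread(optical_axes) / np.pi
    return w_count * (n_valid / n_total) + w_spread * spread
```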
- According to a fourth embodiment, a method for correcting a shape will be described in detail. An example will be described in which the reliability of all Visual Hulls representing a specific object is used to perform processing. If it is determined that the reliability of the Visual Hulls is lower than a threshold value, three main patterns may be considered: deleting all of the Visual Hulls, deleting a part of the Visual Hulls, or replacing the Visual Hulls with a different approximation model. Because the case where all of the Visual Hulls are to be deleted is clear, the other two patterns will be described with reference to Figs. 9A and 9B. Though an example in which the object is a human will be described below, the same processing may be performed for an object other than a human. -
Fig. 9A is a conceptual diagram of a case where a part of a Visual Hull is deleted. Fig. 9A illustrates a top view of a cylinder 901 having a diameter and a height which can contain one person. It is assumed here that the diameter and height of the cylinder are equal to two meters, for example. A solid line 902 indicates a common part (or common region) between a Visual Hull and the cylinder 901. This common part may be used as the corrected shape, deleting any unnecessarily extending part. Fig. 9A illustrates a center position 903 of a bottom plane of the cylinder, and the center position 903 is matched with the barycentric position of the figure obtained by projecting the Visual Hulls to the field. The cylinder may be placed at a position other than such a barycentric position if there is a higher possibility that an object exists there. Alternatively, any solid other than a cylinder may be used, such as a sphere or a rectangular parallelepiped. As an alternative deletion method without using a solid, voxels may be projected to an image and a photo-consistency may be calculated; if it is equal to or lower than a threshold value, the voxels may be deleted. In this case, a reliability for each voxel may be used to perform the deletion.
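- A minimal sketch of the cylinder clipping of Fig. 9A, assuming voxels are (x, y, z) points with z measured upward from the field; the two-metre defaults follow the example above, while the barycentre computation is an assumption of this sketch.

```python
import numpy as np

def clip_to_cylinder(voxels, diameter=2.0, height=2.0):
    """Keep only the common part between a Visual Hull and a person-sized
    cylinder placed at the barycentre of the hull's field projection."""
    center_xy = voxels[:, :2].mean(axis=0)    # barycentre on the field
    dx = voxels[:, 0] - center_xy[0]
    dy = voxels[:, 1] - center_xy[1]
    inside_radius = dx * dx + dy * dy <= (diameter / 2.0) ** 2
    inside_height = (voxels[:, 2] >= 0.0) & (voxels[:, 2] <= height)
    return voxels[inside_radius & inside_height]
```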
- Fig. 9B is a conceptual diagram of a case where a Visual Hull is replaced by a different approximation model. Fig. 9B illustrates a general human figure model 904, a plate 905 called a billboard, and any other simple solid 906. If a billboard is adopted, the height of the billboard is approximately equal to the height of a human. If a simple solid is adopted, its size may be determined in the same manner as illustrated in Fig. 9A. The layout location of the solid may be determined in the same manner as in Fig. 9A. - A user may use a GUI to correct a Visual Hull with a reliability lower than a threshold value according to the first to fourth embodiments. However, when multi-viewpoint moving-image data are input, it may not be realistic for a user to correct the shape in every frame. According to this embodiment, the processing is therefore performed automatically.
-
Fig. 10 illustrates an example of a GUI display screen according to a fifth embodiment. Fig. 10 illustrates a basic display screen of the GUI and includes a display region 1000, a correction method display region 1001, a slide bar 1004, an OK button 1005, and a Cancel button 1006. The correction method display region 1001 includes a radio button 1002 and a correction method 1003. A user may use the radio button 1002 to select a scheme for correcting a shape. The slide bar 1004 may be dragged so that a threshold value for the reliability can be adjusted. When the OK button 1005 is pressed, the preset reliability threshold value and correction method are used to execute correction processing on the geometric data in all image frames. A GUI for selecting a target frame may be displayed so that correction processing can be executed on the geometric data in an image frame selected by the user. When the Cancel button 1006 is pressed, the display region 1000 is closed. - According to the aforementioned first to fifth embodiments, a three-dimensional shape more approximate to the shape of a real object can be generated.
Other Embodiments
- Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a 'non-transitory computer-readable storage medium') to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
Claims (14)
- An image processing apparatus (200) generating geometric data of an object (120), the image processing apparatus (200) comprising:
obtaining means (601) configured to obtain a plurality of images of the object (120), wherein each image is captured from a different viewpoint by a group of cameras (101-110);
generating means (604) configured to generate geometric data of the object (120) based on the images obtained by the obtaining means (601); and
correcting means (606) configured to correct the geometric data generated by the generating means (604), based on a reliability determination (S707) that determines a reliability of at least a part of the geometric data generated by the generating means (604), wherein the reliability of the at least part of the geometric data is determined based on the number of cameras that capture a region of the object (120) corresponding to the at least part of the geometric data,
wherein the correcting means (606) is configured to delete the at least part of the geometric data based on the number of cameras that capture the region of the object (120) being lower than a threshold value.
- An image processing apparatus according to Claim 1, wherein the reliability of the at least part of the geometric data is further determined based on a positional distribution of cameras that capture the region of the object.
- The image processing apparatus according to Claim 1 or 2, wherein the correcting means (606) is configured not to delete the at least part of the geometric data based on the number of cameras that capture the region corresponding to the at least part of the geometric data being equal to or higher than the threshold value.
- The image processing apparatus according to any one of Claims 1 to 3, wherein the threshold value is based on a total number of the cameras.
- The image processing apparatus according to Claim 1, wherein the threshold value is determined based on the number of the cameras.
- The image processing apparatus according to Claim 5, wherein a threshold value in a case where the number of the cameras is a first value is larger than a threshold value in a case where the number of the cameras is a second value being smaller than the first value.
- An image processing apparatus (200) generating geometric data of an object (120), the image processing apparatus (200) comprising:
obtaining means (601) configured to obtain a plurality of images of the object (120), wherein each image is captured from a different viewpoint by a group of cameras (101-110);
generating means (604) configured to generate geometric data of the object (120) based on the images obtained by the obtaining means (601); and
correcting means (606) configured to correct the geometric data generated by the generating means (604), based on a reliability determination (S707) that determines a reliability of at least a part of the geometric data generated by the generating means (604), wherein the reliability of the at least part of the geometric data is determined based on the number of cameras that capture a region of the object (120) corresponding to the at least part of the geometric data,
wherein the correcting means (606) is configured to replace the at least part of the geometric data by an approximation model based on the number of cameras that capture the region of the object (120) being lower than a threshold value.
- An image processing apparatus according to Claim 7, wherein the approximation model is a model described by a two-dimensional or three-dimensional function.
- An image processing apparatus according to Claim 7, wherein the approximation model is a model based on a shape of a human figure.
- The image processing apparatus according to Claim 7, wherein the approximation model is a model based on a height of a human figure.
- A computer-implemented image processing method for generating geometric data of an object, the image processing method comprising:
obtaining a plurality of images of the object, wherein each image is captured from a different viewpoint by a group of cameras;
generating geometric data of the object based on the images; and
correcting the generated geometric data based on a reliability determination that determines a reliability of at least a part of the generated geometric data,
wherein the reliability of the at least part of the geometric data is determined based on the number of cameras that capture a region of the object (120) corresponding to the at least part of the geometric data,
wherein the correcting comprises deleting the at least part of the geometric data based on the number of cameras that capture the region of the object (120) being lower than a threshold value.
- A computer-implemented image processing method for generating geometric data of an object, the image processing method comprising:
obtaining a plurality of images of the object (120), wherein each image is captured from a different viewpoint by a group of cameras (101-110);
generating geometric data of the object (120) based on the obtained images; and
correcting the generated geometric data based on a reliability determination (S707) that determines a reliability of at least a part of the generated geometric data, wherein the reliability of the at least part of the geometric data is determined based on the number of cameras that capture a region of the object (120) corresponding to the at least part of the geometric data,
wherein the correcting comprises replacing the at least part of the geometric data by an approximation model based on the number of cameras that capture the region of the object (120) being lower than a threshold value.
- A program that, when executed by a computer, causes the computer to execute an image processing method according to Claim 11 or 12.
- A computer-readable storage medium storing a program according to Claim 13.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2016201256 | 2016-10-12 |
Publications (2)
Publication Number | Publication Date |
---|---|
EP3309750A1 (en) | 2018-04-18 |
EP3309750B1 (en) | 2024-01-17 |
Family
ID=60201809
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP17195945.5A Active EP3309750B1 (en) | 2016-10-12 | 2017-10-11 | Image processing apparatus and image processing method |
Country Status (3)
Country | Link |
---|---|
US (1) | US10657703B2 (en) |
EP (1) | EP3309750B1 (en) |
JP (1) | JP7013144B2 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7195785B2 (en) * | 2018-06-29 | 2022-12-26 | キヤノン株式会社 | Apparatus, method and program for generating 3D shape data |
JP7544036B2 (en) | 2019-04-12 | 2024-09-03 | ソニーグループ株式会社 | IMAGE PROCESSING APPARATUS, 3D MODEL GENERATION METHOD, AND PROGRAM |
JP7378960B2 (en) * | 2019-05-14 | 2023-11-14 | キヤノン株式会社 | Image processing device, image processing system, image generation method, and program |
EP3846123B1 (en) * | 2019-12-31 | 2024-05-29 | Dassault Systèmes | 3d reconstruction with smooth maps |
JP7605729B2 (en) | 2021-12-23 | 2024-12-24 | Kddi株式会社 | 3D model generation device, method and program |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS4839237B1 (en) | 1969-02-17 | 1973-11-22 | ||
US20040104935A1 (en) | 2001-01-26 | 2004-06-03 | Todd Williamson | Virtual reality immersion system |
US7046840B2 (en) * | 2001-11-09 | 2006-05-16 | Arcsoft, Inc. | 3-D reconstruction engine |
US20050088515A1 (en) | 2003-10-23 | 2005-04-28 | Geng Z. J. | Camera ring for three-dimensional (3D) surface imaging |
JP4839237B2 (en) | 2007-02-07 | 2011-12-21 | 日本電信電話株式会社 | 3D shape restoration method, 3D shape restoration device, 3D shape restoration program implementing the method, and recording medium recording the program |
GB2458305B (en) | 2008-03-13 | 2012-06-27 | British Broadcasting Corp | Providing a volumetric representation of an object |
GB2458927B (en) * | 2008-04-02 | 2012-11-14 | Eykona Technologies Ltd | 3D Imaging system |
JP5068732B2 (en) * | 2008-11-17 | 2012-11-07 | 日本放送協会 | 3D shape generator |
KR101591779B1 (en) | 2009-03-17 | 2016-02-05 | 삼성전자주식회사 | Apparatus and method for generating skeleton model using motion data and image data |
WO2010133007A1 (en) * | 2009-05-21 | 2010-11-25 | Intel Corporation | Techniques for rapid stereo reconstruction from images |
JP6024658B2 (en) * | 2011-07-01 | 2016-11-16 | 日本電気株式会社 | Object detection apparatus, object detection method, and program |
JP2013137760A (en) * | 2011-12-16 | 2013-07-11 | Cognex Corp | Multi-part corresponder for plurality of cameras |
US20150178988A1 (en) | 2012-05-22 | 2015-06-25 | Telefonica, S.A. | Method and a system for generating a realistic 3d reconstruction model for an object or being |
JP6393106B2 (en) | 2014-07-24 | 2018-09-19 | キヤノン株式会社 | Image processing apparatus, image processing method, and program |
US10074214B2 (en) * | 2015-12-31 | 2018-09-11 | Autodesk, Inc. | Systems and methods for generating 3D scenes with time element for display |
US9818234B2 (en) * | 2016-03-16 | 2017-11-14 | Canon Kabushiki Kaisha | 3D shape reconstruction using reflection onto electronic light diffusing layers |
US10430922B2 (en) * | 2016-09-08 | 2019-10-01 | Carnegie Mellon University | Methods and software for generating a derived 3D object model from a single 2D image |
US10074160B2 (en) * | 2016-09-30 | 2018-09-11 | Disney Enterprises, Inc. | Point cloud noise and outlier removal for image-based 3D reconstruction |
KR20180067908A (en) * | 2016-12-13 | 2018-06-21 | 한국전자통신연구원 | Apparatus for restoring 3d-model and method for using the same |
US10403030B2 (en) * | 2017-08-28 | 2019-09-03 | Microsoft Technology Licensing, Llc | Computing volumes of interest for photogrammetric 3D reconstruction |
-
2017
- 2017-05-12 JP JP2017095914A patent/JP7013144B2/en active Active
- 2017-10-10 US US15/729,463 patent/US10657703B2/en active Active
- 2017-10-11 EP EP17195945.5A patent/EP3309750B1/en active Active
Non-Patent Citations (1)
Title |
---|
HANSUNG KIM ET AL: "A Real-Time 3D Modeling System Using Multiple Stereo Cameras for Free-Viewpoint Video Generation", 1 January 2006, Image Analysis and Recognition, Lecture Notes in Computer Science (LNCS), Springer, Berlin, DE, pages 237-249, ISBN: 978-3-540-44894-5, XP019043784 * |
Also Published As
Publication number | Publication date |
---|---|
EP3309750A1 (en) | 2018-04-18 |
US20180101979A1 (en) | 2018-04-12 |
US10657703B2 (en) | 2020-05-19 |
JP7013144B2 (en) | 2022-01-31 |
JP2018063693A (en) | 2018-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP3309750B1 (en) | Image processing apparatus and image processing method | |
KR102316056B1 (en) | Image processing apparatus, image processing method thereof and program | |
CN108475433B (en) | Method and system for large-scale determination of RGBD camera pose | |
CN113689578B (en) | Human body data set generation method and device | |
US11200690B2 (en) | Image processing apparatus, three-dimensional shape data generation method, and non-transitory computer readable storage medium | |
CN106033621B (en) | A kind of method and device of three-dimensional modeling | |
CN109816766A (en) | Image processing apparatus, image processing method and storage medium | |
CN103562934B (en) | Face location detection | |
KR102387891B1 (en) | Image processing apparatus, control method of image processing apparatus, and computer-readable storage medium | |
US11490062B2 (en) | Information processing apparatus, information processing method, and storage medium | |
CN109035330A (en) | Cabinet approximating method, equipment and computer readable storage medium | |
US20180330520A1 (en) | Method and system for calibrating a velocimetry system | |
US20230237777A1 (en) | Information processing apparatus, learning apparatus, image recognition apparatus, information processing method, learning method, image recognition method, and non-transitory-computer-readable storage medium | |
KR20180123302A (en) | Method and Apparatus for Visualizing a Ball Trajectory | |
CN116051736A (en) | Three-dimensional reconstruction method, device, edge equipment and storage medium | |
US11328477B2 (en) | Image processing apparatus, image processing method and storage medium | |
CN104980725B (en) | Apparatus and method for forming a three-dimensional scene | |
CN112634439B (en) | 3D information display method and device | |
CN118511053A (en) | Calculation method and calculation device | |
JP7487266B2 (en) | Image processing device, image processing method, and program | |
US11935182B2 (en) | Information processing apparatus, information processing method, and storage medium | |
US20240054747A1 (en) | Image processing apparatus, image processing method, and storage medium | |
Montenegro et al. | Space carving with a hand-held camera | |
JP2018125642A (en) | Region extraction apparatus and program | |
Jančošek | Large Scale Surface Reconstruction based on Point Visibility |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20181018 |
|
RBV | Designated contracting states (corrected) |
Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20200514 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
INTG | Intention to grant announced |
Effective date: 20230809 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602017078487 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG9D |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20240117 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1651050 Country of ref document: AT Kind code of ref document: T Effective date: 20240117 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240417 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240417 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240517 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240418 Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240517 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20240919 Year of fee payment: 8 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602017078487 Country of ref document: DE |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20240117 |
|
26N | No opposition filed |
Effective date: 20241018 |
|