
US20180240264A1 - Information processing apparatus and method of generating three-dimensional model - Google Patents

Information processing apparatus and method of generating three-dimensional model Download PDF

Info

Publication number
US20180240264A1
Authority
US
United States
Prior art keywords
coordinate
voxel
coordinate points
processing unit
camera image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US15/891,530
Other versions
US10719975B2 (en)
Inventor
Hironao Ito
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Canon Inc filed Critical Canon Inc
Assigned to CANON KABUSHIKI KAISHA. Assignor: ITO, HIRONAO.
Publication of US20180240264A1 publication Critical patent/US20180240264A1/en
Application granted granted Critical
Publication of US10719975B2 publication Critical patent/US10719975B2/en
Legal status: Expired - Fee Related

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 - 3D [Three Dimensional] image rendering
    • G06T15/08 - Volume rendering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 - Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 - Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/50 - Depth or shape recovery
    • G06T7/55 - Depth or shape recovery from multiple images
    • G06T7/564 - Depth or shape recovery from multiple images from contours
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2200/00 - Indexing scheme for image data processing or generation, in general
    • G06T2200/08 - Indexing scheme for image data processing or generation, in general, involving all processing steps from image acquisition to 3D model generation

Definitions

  • the present invention relates to an information processing apparatus that generates a three-dimensional model based on images obtained from a plurality of cameras and a method of generating a three-dimensional model.
  • as a method of generating a three-dimensional model, the volume intersection method is known.
  • a measurement target space shot by a plurality of cameras is divided into small cubes or cuboids (to be referred to as voxels hereinafter).
  • each voxel is geometrically transformed and projected onto a camera image, and it is determined whether the voxel is inside the silhouette of a modeling target object in the camera image. If it is determined that the voxel is inside the silhouette in all of the camera images, the voxel is registered as a voxel forming the target object.
  • the voxels that have been registered as voxels forming the target object are output as a three-dimensional model.
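The carving loop described above can be summarized in code. The following is a minimal sketch under stated assumptions: a hypothetical helper project_to_image() stands in for the geometric transformation onto a camera image, and each silhouette is a 2D boolean mask (these names are illustrative, not from the patent).

```python
import numpy as np

def carve_voxels(voxel_centers, cameras, silhouettes, project_to_image):
    """Volume intersection (shape-from-silhouette) as described above.

    voxel_centers: (N, 3) array of voxel center points in world coordinates.
    cameras: list of per-camera parameters passed to project_to_image.
    silhouettes: list of 2D boolean masks (True = inside the object silhouette).
    project_to_image: hypothetical helper mapping a 3D point to (u, v) pixels.
    Returns the centers of voxels inside the silhouette in ALL camera images.
    """
    kept = []
    for p in voxel_centers:
        inside_all = True
        for cam, sil in zip(cameras, silhouettes):
            u, v = project_to_image(p, cam)
            h, w = sil.shape
            # A point projecting outside the image, or onto a background
            # pixel, disqualifies the voxel.
            if not (0 <= int(v) < h and 0 <= int(u) < w) or not sil[int(v), int(u)]:
                inside_all = False
                break
        if inside_all:
            kept.append(p)
    return np.array(kept)
```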
  • a rotation matrix R is determined by the orientation of the camera in the world coordinate system, and a translation vector t is determined by the position of the camera in the world coordinate system.
  • the rotation matrix R is a 3×3 matrix.
  • the three-dimensional coordinate point (Xc, Yc, Zc) of the camera coordinate system is transformed into a two-dimensional coordinate point (u, v) on a camera image by the perspective projection transformation (u, v) = (Xc/Zc, Yc/Zc).
  • an information processing apparatus that generates a three-dimensional model based on a plurality of camera images obtained using a plurality of cameras, comprising: a transformation unit configured to transform, into two-dimensional coordinate points on a camera image, a plurality of representative coordinate points specified from one processing unit voxel of a plurality of processing unit voxels that are obtained by dividing a target three-dimensional space serving as a target of three-dimensional model generation; a determination unit configured to determine, by using transformation results of the plurality of representative coordinate points by the transformation unit, a coordinate point on the camera image corresponding to an internal coordinate point of the one processing unit voxel; and a generation unit configured to generate the three-dimensional model based on the coordinate point on the camera image corresponding to the internal coordinate point of the one processing unit voxel determined by the determination unit.
  • a generation method of generating a three-dimensional model based on a plurality of camera images obtained using a plurality of cameras comprising: transforming, into two-dimensional coordinate points on a camera image, a plurality of representative coordinate points specified from one processing unit voxel of a plurality of processing voxels that are obtained by dividing a target three-dimensional space serving as a target of three-dimensional model generation; determining, by using transformation results of the plurality of representative coordinate points, a coordinate point on the camera image corresponding to an internal coordinate point of the one processing unit voxel; and generating the three-dimensional model based on the coordinate point on the camera image corresponding to the internal coordinate point of the one processing unit voxel that has been determined.
  • FIG. 1A is a block diagram showing an example of the functional arrangement of a three-dimensional model generation apparatus according to the first embodiment;
  • FIG. 1B is a block diagram showing an example of the hardware arrangement of the three-dimensional model generation apparatus;
  • FIG. 2 is a view showing an example of a system for obtaining input data;
  • FIG. 3 is a view showing an example of a silhouette image;
  • FIG. 4 is a schematic view expressing a target three-dimensional space, processing unit voxels, and model voxels;
  • FIG. 5 is a view exemplifying the results of transforming representative coordinate points of a processing unit voxel into coordinate points on a camera image;
  • FIGS. 6A to 6C are views each showing the transformation of the vertices of a voxel into pixel coordinate points on a silhouette image and a search area;
  • FIG. 7 is a flowchart for expressing voxel determination processing;
  • FIGS. 8A and 8B are flowcharts for expressing three-dimensional model generation processing;
  • FIG. 9 is a block diagram showing an example of the functional arrangement of a three-dimensional model generation apparatus according to the second embodiment;
  • FIG. 10 is a view exemplifying the results of transforming representative coordinate points of a processing unit voxel into coordinate points on a camera image; and
  • FIGS. 11A and 11B are views each showing coordinate points in the camera image and distances between the coordinate points that are used for approximate calculation.
  • a three-dimensional model generation apparatus generates, by the volume intersection method, a three-dimensional model based on a silhouette image of a target object to be modeled for each camera.
  • the calculation load is reduced by performing projective transformation on each representative coordinate point and obtaining each remaining arbitrary coordinate point by approximate calculation.
  • a voxel indicates a unit of division of a target three-dimensional space that is to be shot by a plurality of cameras.
  • Each voxel may be a cube, a cuboid, or another shape.
  • FIG. 1A is a block diagram showing an example of the functional arrangement of a three-dimensional model generation apparatus 10 according to the first embodiment.
  • FIG. 1B is a block diagram showing an example of the hardware arrangement of an information processing apparatus serving as the three-dimensional model generation apparatus 10 .
  • FIG. 2 is a view for explaining a shooting system for obtaining input data of the three-dimensional model generation apparatus 10 according to the first embodiment.
  • a plurality of cameras 11 to 18 capture images of a space including the target space for three-dimensional model generation.
  • the image data obtained by processing each camera image obtained by image capturing and the information of each camera are used as input data of the three-dimensional model generation apparatus 10 .
  • note that in FIG. 2, although eight cameras are shown as the plurality of cameras, the number of cameras is not limited to this. Details of the input data will be described later.
  • the three-dimensional model generation apparatus 10 is formed by an information processing apparatus.
  • a general-purpose computer can be used as the information processing apparatus.
  • a CPU 21 controls the overall information processing apparatus by executing a program stored in a ROM 22 or a RAM 23 .
  • the CPU 21 implements each function as the three-dimensional model generation apparatus 10 (to be described later) by loading a predetermined program stored in a storage device 25 to the RAM 23 and executing the program loaded to the RAM 23 .
  • the ROM 22 is a read only non-volatile memory.
  • the RAM 23 is a memory that is readable and writable as needed.
  • a display device 24 performs various kinds of display under the control of the CPU 21 .
  • the storage device 25 is a large-capacity storage device formed from, for example, a hard disk or the like.
  • the storage device 25 can store the camera images obtained by the cameras 11 to 18 .
  • An input device 26 accepts various kinds of user inputs to the information processing apparatus.
  • An interface 27 is an interface for connecting to an external apparatus, and the plurality of cameras are connected to this interface in this embodiment.
  • the cameras 11 to 18 may be connected via a network such as a LAN.
  • the interface 27 functions as a network interface.
  • a bus 28 communicably connects the above-described components to each other.
  • although each function of the three-dimensional model generation apparatus 10 to be described below is implemented by the CPU 21 executing a predetermined program, it is not limited to this.
  • each function of the three-dimensional model generation apparatus 10 may be implemented by cooperation between software and hardware such as a dedicated IC.
  • some or all of the functions may be implemented by hardware only.
  • FIG. 1A shows an example of the functional arrangement of the three-dimensional model generation apparatus 10 that generates a three-dimensional model based on a plurality of camera images obtained by using the plurality of cameras.
  • a target space setting unit 100 sets a three-dimensional space (target three-dimensional space) that is to be the target of three-dimensional model generation.
  • a cuboid ranging from -2000 mm to 2000 mm along the x-axis, -2000 mm to 2000 mm along the y-axis, and 0 mm to 3000 mm along the z-axis has been set as the target three-dimensional space.
  • the size of the target three-dimensional space is not limited to this.
  • the coordinate system expressing the coordinates of the set three-dimensional space is called a world coordinate system.
  • the coordinate range information of the set three-dimensional space is output to a processing unit voxel division unit 103 .
  • a user determines a target three-dimensional space by designating a range that can be captured by a plurality of cameras in a system for obtaining input data. However, it may be set so that a target three-dimensional space is determined automatically.
  • a camera parameter input unit 101 inputs parameters from each of the cameras 11 to 18 , which shot the target three-dimensional space, and outputs the camera parameters to a representative coordinate transformation unit 105 .
  • camera parameters are a set of parameters formed by including the three-dimensional position of a camera in the world coordinate system, an optical-axis direction vector, a camera screen lower direction vector, a focal length, and the image center.
  • all of the aforementioned parameters need not always be included in each set of camera parameters. For example, only the three-dimensional position of the camera and the optical-axis direction vector (or the orientation information of the camera) may be obtained as the camera parameters.
  • a silhouette image input unit 102 inputs the silhouette image of each camera image.
  • a silhouette image is an image identifying an object region and a region other than the object region by expressing the object region to be the target of model generation by using white pixels and expressing a background region other than the object region by using black pixels.
  • the arrangement of the white pixels and the black pixels of the object region and the background region, respectively, is not limited to this and may be reversed.
  • the silhouette image exemplified in FIG. 3 has been obtained by processing a camera image in which three people are present into a silhouette image.
  • as a method of generating a silhouette image from a camera image, there is, for example, the background subtraction method, in which the object region and the background region are identified by comparing an image shot when the object is not present and an image shot when the object is present (a code sketch follows this list of methods).
  • the method of generating a silhouette image is not limited to this, as a matter of course.
  • a method of detecting an object region by obtaining a correlation using a plurality of images captured at different times, an edge extraction method, or the like can be used.
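As an illustration of the background subtraction method mentioned above, the following sketch builds a binary silhouette image with OpenCV; the function name and the threshold value of 30 are assumptions chosen for illustration, not values from the patent.

```python
import cv2

def make_silhouette(background_path, frame_path, threshold=30):
    """Background subtraction: pixels that differ strongly from the
    object-free background image become white (object region)."""
    background = cv2.imread(background_path, cv2.IMREAD_GRAYSCALE)
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    diff = cv2.absdiff(frame, background)
    # White (255) = object region, black (0) = background region.
    _, silhouette = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    return silhouette
```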
  • the processing unit voxel division unit 103 divides the space set by the target space setting unit 100 into a plurality of processing unit voxels.
  • the information of each processing unit voxel is output to a representative coordinate determination unit 104 and a model voxel division unit 106 .
  • the information of each processing unit voxel includes the smallest x-, y-, and z-vertex coordinates and the length of each side.
  • note that the length of each side is set to the same value in all of the processing unit voxels and that each side of a processing unit voxel is arranged to be parallel to one of the x-, y-, and z-axes.
  • the size of each processing unit voxel may be preset, or it may be determined in accordance with the size of the target three-dimensional space or the voxel size of each voxel which is to form the three-dimensional model to be ultimately output.
  • Each voxel that forms this three-dimensional model which is to be ultimately output is referred to as a model voxel.
  • the processing unit voxel is determined in accordance with the size of each model voxel which is to be output.
  • FIG. 4 is a schematic view expressing the target three-dimensional space set by the target space setting unit 100 , the processing unit voxels, and the model voxels.
  • the target three-dimensional space is divided into a processing unit voxel 201 and the like.
  • each processing unit voxel is a voxel in which the length of each side is 1000 mm, and the set target three-dimensional space is divided into quarters in the x-axis and y-axis directions and divided into thirds in the z-axis direction. Note that, however, the division count of the processing unit voxels is not limited to this.
  • the representative coordinate determination unit 104 specifies, in each processing unit voxel, a plurality of representative coordinate points to be used for transforming the three-dimensional coordinate points of the target space into coordinate points on a camera image.
  • an approximate calculation unit 107 calculates transformation coefficients for transforming movement amounts in the respective axis directions of the target three-dimensional space into movement amounts in the camera image.
  • representative coordinate points are obtained so as to include a pair of coordinate points in which only the x-coordinates differ, a pair of coordinate points in which only the y-coordinates differ, and a pair of coordinate points in which only the z-coordinates differ.
  • the following four points of a processing unit voxel are selected as the representative coordinate points: the vertex (X1, Y1, Z1) with the smallest x-, y-, and z-values (a representative coordinate point 300), and the three vertices (X2, Y1, Z1), (X1, Y2, Z1), and (X1, Y1, Z2) (representative coordinate points 301, 302, and 303), each of which differs from the point 300 in only one coordinate.
  • although this embodiment has described an example in which four points are selected from among the vertices of the processing unit voxel, other points can be selected as well; for example, the four points may be selected by using the center of gravity of the processing unit voxel instead of its vertices, but the selection is not limited to this.
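A small sketch of this selection, assuming the processing unit voxel is described by its smallest vertex and a common side length (as produced by the processing unit voxel division unit); the helper name is hypothetical.

```python
def representative_points(origin, side):
    """Return the four representative vertices of a processing unit voxel:
    the smallest vertex and the three vertices differing in exactly one axis."""
    x1, y1, z1 = origin
    return [
        (x1, y1, z1),          # point 300: smallest x, y, z
        (x1 + side, y1, z1),   # point 301: differs only in x
        (x1, y1 + side, z1),   # point 302: differs only in y
        (x1, y1, z1 + side),   # point 303: differs only in z
    ]
```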
  • the determined representative coordinate points are output to the representative coordinate transformation unit 105 .
  • the representative coordinate transformation unit 105 transforms, based on the camera parameters, the input representative coordinate points (coordinate points in the target three-dimensional space) of the processing unit voxel into coordinate points (two-dimensional coordinate points) of a camera image shot by each camera.
  • a coordinate system expressing the target three-dimensional space will be represented as a world coordinate system hereinafter.
  • each coordinate point in the world coordinate system of the target three-dimensional space is represented by coordinate values ranging from (-2000, -2000, 0) to (2000, 2000, 3000).
  • the aforementioned camera parameters indicate the position and the orientation of each camera in the world coordinate system.
  • the representative coordinate transformation unit 105 calculates coordinate points of the plurality of representative coordinate points in the target three-dimensional space by matrix calculation and perspective projection transformation.
  • the representative coordinate transformation unit 105 transforms coordinate values (Xw, Yw, Zw) of the world coordinate system into coordinate values (Xc, Yc, Zc) of the camera coordinate system by using a rotation matrix R and a translation vector t derived from the camera parameters.
  • the above-described equation (1) can be used for this transformation.
  • the representative coordinate transformation unit 105 transforms the three-dimensional coordinate point (Xc, Yc, Zc) of the camera coordinate system into a two-dimensional coordinate point (u, v) of a camera image by perspective projection transformation.
  • the above-described equation (2) can be used for this transformation.
  • the representative coordinate transformation unit 105 outputs each representative coordinate point and the information of the corresponding transformed coordinate point in the camera image to the approximate calculation unit 107 .
  • the model voxel division unit 106 divides each processing unit voxel into model voxels of a three-dimensional model that has been set in advance.
  • the model voxels are also arranged to be parallel to the x-, y-, and z-axes in the same manner as the processing unit voxels.
  • each processing unit voxel is filled with the model voxels without any gaps; that is, the processing unit voxel is exactly tiled by the model voxels.
  • the model voxel division unit 106 outputs the voxel group information after the division to the approximate calculation unit 107 .
  • the voxel group information includes, for each model voxel, the smallest x-, y-, and z-vertex coordinates and the length of each side.
  • FIG. 4 shows how the processing unit voxel 201, which is a 1000 mm cube, is divided into model voxels, each of which is a 100 mm cube. Hence, one processing unit voxel is divided into a total of 1000 model voxels. Note that although not all of the model voxels are depicted in FIG. 4 for the sake of descriptive convenience, all of the regions of the processing unit voxels are divided into model voxels in practice.
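A sketch of this division, assuming axis-aligned cubic voxels; a 1000 mm processing unit voxel with 100 mm model voxels yields the 10 × 10 × 10 = 1000 model voxels mentioned above. The function name is illustrative.

```python
def divide_into_model_voxels(origin, unit_side, model_side):
    """Tile a processing unit voxel with model voxels (no gaps),
    returning the smallest vertex of each model voxel."""
    n = int(unit_side // model_side)  # divisions per axis
    x0, y0, z0 = origin
    return [
        (x0 + i * model_side, y0 + j * model_side, z0 + k * model_side)
        for i in range(n) for j in range(n) for k in range(n)
    ]

# Example: len(divide_into_model_voxels((0, 0, 0), 1000, 100)) == 1000
```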
  • based on how the positional relationships of the plurality of representative coordinate points change from the target three-dimensional space to a camera image, the approximate calculation unit 107 performs approximate calculation to determine the coordinate points in the camera image that correspond to the internal coordinate points of one processing unit voxel.
  • the approximate calculation according to this embodiment will be described hereinafter.
  • the approximate calculation unit 107 calculates, based on the representative coordinate points and the information of the corresponding coordinate points in a camera image that have been received from the representative coordinate transformation unit 105 , the transformation coefficients between the movement amounts of the three-dimensional coordinate points in the world coordinate system and pixel movement amounts in a camera image.
  • the transformation coefficients are calculated for every camera that shoots the target space.
  • FIG. 5 is a view exemplifying the results of transforming the representative coordinate points of a processing unit voxel into coordinate points on a camera image. A method of calculating the transformation coefficient between each of the movement amounts of three-dimensional coordinate points in the world coordinate system and each of the pixel movement amounts in a camera image will be described with reference to FIG. 5 hereinafter.
  • the approximate calculation unit 107 calculates, from the transformation results of the plurality of representative coordinate points, the transformation coefficient of the movement amount from a u-coordinate to a v-coordinate in a camera image for each of the movement amounts of the x-, y-, and z-coordinates in the target three-dimensional space.
  • the approximate calculation unit 107 uses each of the calculated transformation coefficients to calculate (to approximate) the coordinate point in the camera image that corresponds to the arbitrary coordinate point in the target three-dimensional space. This operation will be described in detail below.
  • the result obtained from transforming a representative coordinate point 300 of a processing unit voxel is a transformed coordinate point 400 on the camera image.
  • a representative coordinate point 301 corresponds to a transformed coordinate point 401
  • a representative coordinate point 302 corresponds to a transformed coordinate point 402
  • a representative coordinate point 303 corresponds to a transformed coordinate point 403 .
  • between the coordinate values (X1, Y1, Z1) of the representative coordinate point 300 and the coordinate values (X2, Y1, Z1) of the representative coordinate point 301, only the x-values differ, and the y- and z-values are the same.
  • hence, the difference between the coordinate values (ux, vx) of the transformed coordinate point 401 and the coordinate values (ub, vb) of the transformed coordinate point 400 is the difference generated by movement in the x-axis direction in the three-dimensional space.
  • from this, the relationship between a movement amount in the x-axis direction in the three-dimensional space and the resulting movement of coordinate points on the camera image can be calculated.
  • Pixel movement amounts dux and dvx on the camera image for each 1 mm of the movement amount on the x-axis in the three-dimensional space are calculated by
  • dux = (ux - ub) / (X2 - X1)   (3)
  • dvx = (vx - vb) / (X2 - X1)   (4)
  • pixel movement amounts duy and dvy on the camera image for each 1 mm of the movement amount on the y-axis in the three-dimensional space are calculated from the coordinate values of the representative coordinate point 300 and the representative coordinate point 302 and the coordinate values of the transformed coordinate point 400 and the transformed coordinate point 402 by
  • duy = (uy - ub) / (Y2 - Y1)   (5)
  • dvy = (vy - vb) / (Y2 - Y1)   (6)
  • pixel movement amounts duz and dvz on the camera image for each 1 mm of the movement amount on the z-axis in the three-dimensional space are calculated from the coordinate values of the representative coordinate point 300 and the representative coordinate point 303 and the coordinate values of the transformed coordinate point 400 and the transformed coordinate point 403 by
  • duz = (uz - ub) / (Z2 - Z1)   (7)
  • dvz = (vz - vb) / (Z2 - Z1)   (8)
  • an arbitrary coordinate point (X, Y, Z) satisfying X1 ≤ X ≤ X2, Y1 ≤ Y ≤ Y2, and Z1 ≤ Z ≤ Z2 is then transformed into a coordinate point (u, v) on the camera image by
  • u = ub + dux (X - X1) + duy (Y - Y1) + duz (Z - Z1)   (9)
  • v = vb + dvx (X - X1) + dvy (Y - Y1) + dvz (Z - Z1)   (10)
  • although calculating the transformation coefficients adds some cost in comparison to the conventional art, each image coordinate can then be obtained with only three multiplications.
  • the approximate calculation unit 107 performs the above-described calculation and transforms the coordinate values of eight vertices of each model voxel, serving as positions related to the model voxel, into coordinate points in each camera image.
  • the approximate calculation unit outputs the coordinate values of each camera image to an inside/outside determination unit 108 .
  • the method of transforming the coordinate points in a voxel is not limited to the above-described approximate calculation, and various kinds of modifications are applicable. That is, the transformation method can be any method that derives coordinate points in the camera image from coordinate points in a voxel of the target three-dimensional space based on the positional relationships of the representative coordinate points of the voxel in the target three-dimensional space and the positional relationships of the representative coordinate points of the voxel in the camera image.
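Putting equations (3) to (10) together, the first embodiment's approximation can be sketched as follows, assuming the four representative vertices and their exactly projected image coordinates are given; names such as make_approx_transform are hypothetical.

```python
def make_approx_transform(p300, p301, p302, p303, q300, q301, q302, q303):
    """Build the approximate world-to-image transform of the first embodiment.

    p300..p303: representative points (X1,Y1,Z1), (X2,Y1,Z1), (X1,Y2,Z1), (X1,Y1,Z2).
    q300..q303: their projected image coordinates (ub,vb), (ux,vx), (uy,vy), (uz,vz).
    """
    (x1, y1, z1), (x2, _, _), (_, y2, _), (_, _, z2) = p300, p301, p302, p303
    (ub, vb), (ux, vx), (uy, vy), (uz, vz) = q300, q301, q302, q303
    # Transformation coefficients, equations (3)-(8): pixel movement per mm.
    dux, dvx = (ux - ub) / (x2 - x1), (vx - vb) / (x2 - x1)
    duy, dvy = (uy - ub) / (y2 - y1), (vy - vb) / (y2 - y1)
    duz, dvz = (uz - ub) / (z2 - z1), (vz - vb) / (z2 - z1)

    def transform(x, y, z):
        # Equations (9) and (10): three multiplications per image coordinate.
        u = ub + dux * (x - x1) + duy * (y - y1) + duz * (z - z1)
        v = vb + dvx * (x - x1) + dvy * (y - y1) + dvz * (z - z1)
        return u, v

    return transform
```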
  • the inside/outside determination unit 108 determines whether a model voxel in each camera image is within the silhouette of the model generation target object.
  • FIGS. 6A to 6C are views exemplifying the results of transforming the eight vertices of a model voxel into pixel coordinate points on silhouette images.
  • the inside/outside determination unit 108 compares the transformed coordinate points and extracts the largest and the smallest u- and v-coordinates.
  • a rectangular region bounded by the largest and the smallest u- and v-coordinates is set as the search region of a silhouette image. In FIG. 6A, the inside/outside determination unit 108 sets a search region 500 from the coordinate points related to the model voxel.
  • the inside/outside determination unit 108 searches whether a white pixel (silhouette pixel) is present in this search region 500 . If a white pixel is present, the inside/outside determination unit determines that the model voxel is inside the silhouette. If a white pixel is not present, the model voxel is determined to be outside the silhouette.
  • if some of the transformed coordinate points fall outside the silhouette image, as shown in FIG. 6B, the inside/outside determination unit 108 determines a search region 501 by clipping the region with the largest coordinate values of the silhouette image. Also, if all of the transformed coordinate points are outside the silhouette image, as shown in FIG. 6C, the inside/outside determination unit 108 does not perform the determination for that camera, since the coordinate points are outside the image capturing range of the camera. Subsequently, the inside/outside determination unit 108 counts the cameras that are determined to be inside the silhouette and the cameras that are determined to be outside the silhouette for that model voxel. The number of cameras inside the silhouette and the number of cameras outside the silhouette are output, as the determination result, to a three-dimensional model determination unit 109.
  • the inside/outside silhouette determination may be performed by using only one model voxel coordinate point, transforming the one coordinate point into a camera image, and determining whether the coordinate point of the camera image is a white pixel.
  • one predetermined coordinate point among the vertices of the model voxel may be used or the center of gravity of the model voxel may be used as the one predetermined coordinate point.
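A sketch of the search-region test described above, assuming the eight projected vertices are given as an array and the silhouette is a boolean mask; clamping to the image bounds corresponds to the FIG. 6B case, and a fully off-image projection to the FIG. 6C case. Returning None mirrors the exclusion of a camera whose view does not contain the voxel.

```python
import numpy as np

def inside_silhouette(projected_vertices, silhouette):
    """projected_vertices: (8, 2) array of (u, v) points for one model voxel.
    silhouette: 2D boolean mask (True = white / object pixel).
    Returns True/False for inside/outside the silhouette, or None if the
    voxel projects entirely outside the image (camera not counted)."""
    h, w = silhouette.shape
    u_min, v_min = projected_vertices.min(axis=0)
    u_max, v_max = projected_vertices.max(axis=0)
    if u_max < 0 or v_max < 0 or u_min >= w or v_min >= h:
        return None  # FIG. 6C: outside the image capturing range
    # FIG. 6B: clamp the search region to the image bounds.
    u0, v0 = max(int(u_min), 0), max(int(v_min), 0)
    u1, v1 = min(int(u_max), w - 1), min(int(v_max), h - 1)
    # Inside if any white pixel exists in the rectangular search region.
    return bool(silhouette[v0:v1 + 1, u0:u1 + 1].any())
```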
  • the three-dimensional model determination unit 109 determines, based on the result of inside/outside silhouette determination by the inside/outside determination unit 108 , whether the target model voxel is a voxel forming the model.
  • the three-dimensional model determination unit outputs, to a three-dimensional model generation unit 110 , the coordinate values of the model voxel which has been determined to be a voxel forming the model.
  • Model voxel determination processing performed by the three-dimensional model determination unit 109 according to the first embodiment will be described below in accordance with the flowchart of FIG. 7 .
  • the three-dimensional model determination unit 109 determines whether the number of cameras determined to be outside the silhouette is equal to or more than a first threshold (step S 700). If it is (YES in step S 700), the three-dimensional model determination unit 109 determines that the model voxel is not a voxel forming the three-dimensional model (step S 701). If it is not (NO in step S 700), the three-dimensional model determination unit 109 determines whether the number of cameras determined to be inside the silhouette is equal to or more than a second threshold (step S 702).
  • if the number of cameras determined to be inside the silhouette is less than the second threshold (NO in step S 702), the three-dimensional model determination unit 109 determines that the model voxel is not a voxel forming the three-dimensional model (step S 701). If it is equal to or more than the second threshold (YES in step S 702), the three-dimensional model determination unit 109 determines that the model voxel is a voxel forming the three-dimensional model (step S 703).
  • one is set as the first threshold for the number of cameras determined to be outside the silhouette in step S 700
  • two is set as the second threshold for the number of cameras determined to be inside the silhouette in step S 702 .
  • the first and second thresholds are not limited to those described above.
  • the thresholds may be set in accordance with the shooting environment and the number of cameras. Also, in the determination as to whether a model voxel is a voxel forming the three-dimensional model, it may be set so that only the number of cameras determined to be inside the silhouette is used, or so that only the number of cameras determined to be outside the silhouette is used.
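The flowchart of FIG. 7 reduces to two comparisons. A sketch with the example thresholds above (first threshold 1, second threshold 2); the function name is illustrative.

```python
def forms_model(n_outside, n_inside, first_threshold=1, second_threshold=2):
    """Model voxel determination of FIG. 7.

    n_outside / n_inside: cameras whose silhouette test put the voxel
    outside / inside the silhouette (off-image cameras are not counted)."""
    if n_outside >= first_threshold:      # step S 700 -> S 701
        return False
    return n_inside >= second_threshold   # step S 702 -> S 703 / S 701
```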
  • the three-dimensional model generation unit 110 outputs the coordinate values of the voxel that has been determined by the three-dimensional model determination unit 109 to be a model voxel forming the model.
  • note that an aggregate of points is used as the output format and that the coordinate values of each point indicate the center-of-gravity coordinate point of a model voxel.
  • the coordinate representation expresses the numerical values of the x-, y-, and z-coordinates in mm.
  • the center of gravity of a model voxel has been used as the coordinate values of the model voxel to be output, but it is not limited to this.
  • one of the vertices of the model voxel may be used.
  • the unit of coordinate representation is not limited to mm.
  • although an aggregate of points has been assumed as the output format here, a format such as a mesh model or polygons may also be used.
  • the three-dimensional model generation processing performed by the three-dimensional model generation apparatus 10 will be described below in accordance with the flowcharts of FIGS. 8A and 8B .
  • the target space setting unit 100 sets the target three-dimensional space for model generation (step S 800 ).
  • the processing unit voxel division unit 103 divides the set target three-dimensional space into processing unit voxels and inputs the processing unit voxels to the representative coordinate determination unit 104 and the model voxel division unit 106 (step S 801 ).
  • the processes of steps S 802 to S 808 are executed for each processing voxel hereinafter.
  • the representative coordinate determination unit 104 determines the representative coordinate points of each processing voxel and outputs the determined representative coordinate points to the representative coordinate transformation unit 105 (step S 802 ).
  • the representative coordinate transformation unit 105 transforms, by calculating equations (1) and (2), the representative coordinate points into the coordinate points of each camera image and outputs the representative coordinate points and the information of the transformed coordinate points in each camera image to the approximate calculation unit 107 (step S 803).
  • the approximate calculation unit 107 calculates, by calculating the equations (3) to (8), the transformation coefficients between the movement amounts of the three-dimensional coordinate points in the world coordinate system and the pixel movement amounts in each camera image from the representative coordinate points and the information of the transformed coordinate points in each camera image (step S 804 ).
  • the model voxel division unit 106 divides the processing unit voxel into model voxels and outputs the information of the divided model voxels to the approximate calculation unit 107 (step S 805 ).
  • the processes of steps S 806 to S 808 are executed for every model voxel. Of these, the processes of steps S 806 and S 807 are performed for every camera.
  • the approximate calculation unit 107 calculates each transformed coordinate point on a camera image for the coordinate points of the model voxel by the approximate calculation indicated in equations (9) and (10) (step S 806).
  • the inside/outside determination unit 108 performs, for each camera, inside/outside silhouette determination by searching the search region determined from the transformed coordinate points on the camera image.
  • the inside/outside determination unit outputs the number of cameras that are determined to be inside the silhouette and the number of cameras that are determined to be outside the silhouette to the three-dimensional model determination unit 109 (step S 807 ).
  • the three-dimensional model determination unit 109 determines, based on the result from the inside/outside silhouette determination of the model voxel, whether the model voxel is a model voxel that forms the three-dimensional model (step S 808 ).
  • the three-dimensional model generation unit 110 outputs, as the three-dimensional model, the aggregate of points of the model voxels that have been determined to form the model (step S 809).
  • as described above, according to the first embodiment, when three-dimensional coordinate points are transformed into coordinate positions on a camera image in the volume intersection method, only the representative coordinate points undergo projective transformation, and the coordinate positions on the camera image of each remaining three-dimensional coordinate point are obtained by approximate calculation. Therefore, it is possible to greatly reduce the calculation load when constructing a three-dimensional model by the volume intersection method.
  • when the approximate calculation described in the first embodiment is used, the calculation error becomes larger than that of the matrix calculation shown in equations (1) and (2).
  • the influence of this calculation error on the accuracy of the generated three-dimensional model depends on the size of each processing unit voxel and the size of each model voxel: the larger the processing unit voxel is with respect to the model voxel, the larger the calculation error and its influence on the accuracy of the three-dimensional model.
  • on the other hand, if the difference between the size of the processing unit voxel and the size of the model voxel is small, the load of calculating the transformation coefficients may become relatively heavy, and the processing may be slower than when the normal matrix calculation is performed.
  • in the second embodiment, whether to execute coordinate transformation by approximate calculation or by matrix calculation is determined in consideration of the influence of the calculation error and the benefit of the increased processing speed, in accordance with the size of each processing unit voxel and of each model voxel which is ultimately output. That is, whether to perform coordinate transformation by approximate calculation or matrix calculation is suitably switched in accordance with the size of the voxels to be processed.
  • FIG. 9 is a block diagram showing an example of the functional arrangement of a three-dimensional model generation apparatus 10 according to the second embodiment. Blocks having the same functions as those of the first embodiment ( FIG. 1A ) are denoted by the same reference numerals.
  • a method selection unit 111 obtains the information of a processing unit voxel from the processing unit voxel division unit 103 and the information of a model voxel from the model voxel division unit 106 .
  • the method selection unit 111 selects, in accordance with the ratio between the size of the processing unit voxel and the size of the model voxel, to execute coordinate transformation by approximate calculation or by matrix calculation which allows an accurate transformation result to be obtained. The details of the selection method will be described later.
  • if the method selection unit 111 selects the approximate calculation, the information of the processing unit voxel is output to a representative coordinate determination unit 104 and the information of the model voxel is output to the approximate calculation unit 107.
  • if the method selection unit 111 selects the matrix calculation, the information of the model voxel is output to a matrix calculation unit 112.
  • the matrix calculation unit 112 obtains the camera parameters of each camera from a camera parameter input unit 101 , obtains the information of the model voxel from the method selection unit 111 , and transforms, by calculating the equations (1) and (2), the coordinate points of the respective eight vertices of the model voxel into coordinate points on each camera image. That is, the matrix calculation unit 112 executes matrix calculation of transforming the coordinate points of a target three-dimensional space into coordinate points on a camera image by matrix transformation and perspective projection transformation. The matrix calculation unit 112 outputs the converted coordinate values to an inside/outside determination unit 108 .
  • inside/outside silhouette determination by the inside/outside determination unit 108 and model voxel determination by a three-dimensional model determination unit 109 are performed, and the three-dimensional model is generated by a three-dimensional model generation unit 110 .
  • the method selection unit 111 selects one of the approximate calculation and the matrix calculation based on the relationship between the size of the processing unit voxel and the size of the model voxel. More specifically, if the ratio of the size of the processing unit voxel to the size of the model voxel is between a preset lower threshold and a preset upper threshold, the approximate calculation is selected; otherwise, the matrix calculation is selected.
  • if the ratio is larger than the upper threshold, the error of the approximate calculation becomes large, so the matrix calculation is selected. If the difference between the sizes of the voxels is small, the speed gain of the approximate calculation will be small and may, on the contrary, increase the load in some cases; hence, even in a case in which the aforementioned ratio is lower than the lower threshold, the matrix calculation is selected.
  • although the second embodiment showed a method in which an upper threshold and a lower threshold are used, it may be arranged so that only one of the upper threshold and the lower threshold is set, and the calculation method may be selected based on whether the ratio exceeds this threshold. Also, the calculation method may be selected based on whether the size of the processing unit voxel or the size of the model voxel is larger than a predetermined size. A sketch of the selection rule follows.
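A sketch of this selection rule, assuming voxel sizes are given as side lengths; the threshold values are illustrative assumptions, not values from the patent.

```python
def select_method(unit_side, model_side, lower=4.0, upper=50.0):
    """Second embodiment: choose the transformation method from the size ratio.

    The thresholds are illustrative; in practice they would be tuned to the
    shooting environment and the available computing resources."""
    ratio = unit_side / model_side
    # Too large a ratio -> approximation error dominates;
    # too small -> coefficient setup outweighs the speed gain.
    return "approximate" if lower <= ratio <= upper else "matrix"
```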
  • as described above, according to the second embodiment, coordinate transformation is performed by approximate calculation only in cases in which its benefit is expected. That is, the approximate calculation and the matrix calculation are used selectively, and it is possible to generate a three-dimensional model with high accuracy and high speed.
  • in the first embodiment, a transformation coefficient is determined based on the assumption that a movement amount in each axis direction in the world coordinate system is reflected uniformly in the movement amount in each axis direction in a camera image.
  • in the third embodiment, a higher-accuracy approximate calculation is performed by considering the deformation of an object region when the target three-dimensional space is projected onto a camera image. More specifically, the change in the length of a line segment in the camera image when a predetermined line segment is translated between representative coordinate points in the target three-dimensional space is added to each transformation coefficient of the first embodiment.
  • the arrangement of a three-dimensional model generation apparatus 10 according to the third embodiment is the same as that of the first embodiment ( FIG. 1A ). However, the operations of a representative coordinate determination unit 104 and an approximate calculation unit 107 are different from those of the first embodiment.
  • FIG. 10 is a view exemplifying the results from transforming representative coordinate points of a processing unit voxel into coordinate points on a camera image according to the third embodiment.
  • in the first embodiment, the representative coordinate determination unit 104 determined four points among the vertices of the processing unit voxel as the representative coordinate points.
  • in the third embodiment, the representative coordinate determination unit 104 determines, among the vertices of the processing unit voxel, the following six points as the representative coordinate points.
  • a point with the smallest x-, y-, and z-values (a representative coordinate point 600 and a transformed coordinate point 700 in FIG. 10).
  • a point with the smallest y- and z-values and the largest x-value (a representative coordinate point 601 and a transformed coordinate point 701 in FIG. 10).
  • a point with the smallest x- and z-values and the largest y-value (a representative coordinate point 602 and a transformed coordinate point 702 in FIG. 10).
  • a point with the smallest x- and y-values and the largest z-value (a representative coordinate point 603 and a transformed coordinate point 703 in FIG. 10).
  • a point with the smallest y-value and the largest x- and z-values (a representative coordinate point 604 and a transformed coordinate point 704 in FIG. 10).
  • a point with the smallest z-value and the largest x- and y-values (a representative coordinate point 605 and a transformed coordinate point 705 in FIG. 10).
  • the representative coordinate points are not limited to the coordinate points of the vertices of the processing unit voxel.
  • FIGS. 11A and 11B are views each showing coordinate points on a camera image used for approximate calculation and distances between the respective coordinate points.
  • the approximate calculation unit 107 calculates, from the transformation results of the representative coordinate points, the distances Lx1, Lx2, Ly1, Ly2, Lz1, and Lz2 between the coordinate points shown in FIGS. 11A and 11B.
  • the approximate calculation unit 107 transforms an arbitrary coordinate point of the three-dimensional coordinate points into a coordinate point of the camera image by using the following calculations.
  • dux = (ux - ub) × (Lx1 × dx) / ((Lx1 × dx) + (Lx2 × (X2 - X1 - dx)))   (11)
  • dvx = (vx - vb) × (Lx1 × dx) / ((Lx1 × dx) + (Lx2 × (X2 - X1 - dx)))   (12)
  • duy = (uy - ub) × (Ly1 × dy) / ((Ly1 × dy) + (Ly2 × (Y2 - Y1 - dy)))   (13)
  • dvy = (vy - vb) × (Ly1 × dy) / ((Ly1 × dy) + (Ly2 × (Y2 - Y1 - dy)))   (14)
  • duz = (uz - ub) × (Lz1 × dz) / ((Lz1 × dz) + (Lz2 × (Z2 - Z1 - dz)))   (15)
  • dvz = (vz - vb) × (Lz1 × dz) / ((Lz1 × dz) + (Lz2 × (Z2 - Z1 - dz)))   (16)
  • where dx = X - X1, dy = Y - Y1, and dz = Z - Z1 for the coordinate point (X, Y, Z) to be transformed.
  • the results of these calculations can be used to transform an arbitrary coordinate point 304 (X, Y, Z) of the target three-dimensional space into a coordinate point (u, v) on the camera image by
  • u = ub + dux + duy + duz   (17)
  • v = vb + dvx + dvy + dvz   (18)
  • in the third embodiment, since the approximate calculation of the coordinate transformation takes into account the deformation of the projected three-dimensional coordinate points, a higher-accuracy coordinate transformation can be performed than in the first embodiment. That is, compared to the first embodiment, the quality of the generated three-dimensional model improves.
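A sketch of one axis term of this weighted interpolation (equations (11), (12), and (17), (18) above), assuming Lx1 and Lx2 are the projected lengths of the two parallel x-direction voxel edges; the helper name and signature are hypothetical, and the y- and z-terms follow the same pattern.

```python
def axis_term(q_far, q_base, d, extent, l_near, l_far):
    """One axis's contribution to u or v (e.g. equation (11) or (12) for x).

    q_far - q_base: total image displacement along this axis's edge.
    d: movement along the axis from the base vertex (e.g. dx = X - X1).
    extent: edge length in world units (e.g. X2 - X1).
    l_near, l_far: projected lengths of the two parallel edges (e.g. Lx1, Lx2),
    which encode the perspective foreshortening along the axis.
    """
    weight = (l_near * d) / (l_near * d + l_far * (extent - d))
    return (q_far - q_base) * weight

# Equations (17)/(18): u = ub + du_x + du_y + du_z, and likewise for v,
# summing the three axis terms computed as above.
```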
  • note that the second embodiment can be applied so that whether to use the approximate calculation or the matrix calculation of the third embodiment is selected based on the size of the processing unit voxel and the size of the model voxel. Also, in each of the above-described embodiments, after processing in a model voxel is completed, it is possible to set the model voxel as a new processing unit voxel, further divide it, and repeat the processing. According to the above-described embodiments, it is possible to reduce the calculation load at the time of three-dimensional model generation.
  • note that the approximate calculation unit 107 can also calculate the two-dimensional coordinate point of a non-representative coordinate point by linearly interpolating the two-dimensional coordinate points on the camera image of the representative coordinate points. This allows a three-dimensional model to be generated with a lighter load. It may also be set so that the user can designate a specific method among the plurality of approximate calculation methods described in the aforementioned embodiments. As a result, three-dimensional model generation suited to the desired model accuracy and the available calculation resources can be implemented.
  • Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s).
  • the computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions.
  • the computer executable instructions may be provided to the computer, for example, from a network or the storage medium.
  • the storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Generation (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

An information processing apparatus that generates a three-dimensional model based on a plurality of camera images obtained using a plurality of cameras, performs: transforming, into two-dimensional coordinate points on a camera image, a plurality of representative coordinate points specified from one processing unit voxel of a plurality of processing unit voxels that are obtained by dividing a target three-dimensional space serving as a target of three-dimensional model generation; determining, by using transformation results of the plurality of representative coordinate points, a coordinate point on the camera image corresponding to an internal coordinate point of the one processing unit voxel; and generating the three-dimensional model based on the determined coordinate point on the camera image corresponding to the internal coordinate point of the one processing unit voxel.

Description

    BACKGROUND OF THE INVENTION

    Field of the Invention
  • The present invention relates to an information processing apparatus that generates a three-dimensional model based on images obtained from a plurality of cameras and a method of generating a three-dimensional model.
  • Description of the Related Art
  • As a method of generating a three-dimensional model, the volume intersection method is known. In “Virtual View Generation for 3D Digital Video” (IEEE MULTIMEDIA Vol. 4 No. 1 pp. 18-26, 1997), a measurement target space shot by a plurality of cameras is divided into small cubes or cuboids (to be referred to as voxels hereinafter). Furthermore, each voxel is geometrically transformed and projected onto a camera image, and it is determined whether the voxel is inside the silhouette of a modeling target object in the camera image. If it is determined that the voxel is inside the silhouette in all of the camera images, the voxel is registered as a voxel forming the target object. When the determination is completed for each voxel, the voxels that have been registered as voxels forming the target object are output as a three-dimensional model.
  • However, when voxels are geometrically transformed and projected onto each camera image in the manner shown in "Virtual View Generation for 3D Digital Video" (IEEE MULTIMEDIA Vol. 4 No. 1 pp. 18-26, 1997), an extremely large calculation load is required, as will be described below. Hence, an extremely long time is needed to generate a model. In addition, the cost of the apparatus increases when a model is to be generated at high speed.
  • Assume that the three-dimensional space of the measurement target space is a world coordinate system, and the optical-axis direction of a camera is the z-axis, the vertically upward direction of the camera is the y-axis, and the right hand direction of the camera is the x-axis in a camera coordinate system. To transform the coordinate values of the world coordinate system into the coordinate values of a camera image, calculation by the following equations (1) and (2) is generally required. First, coordinate values (Xw, Yw, Zw) of the world coordinate system are transformed into coordinate values (Xc, Yc, Zc) of the camera coordinate system by
  • (Xc, Yc, Zc)^T = R (Xw, Yw, Zw)^T + t   (1)
  • wherein a rotation matrix R is determined by the orientation of the camera in the world coordinate system, and a translation vector t is determined by the position of the camera in the world coordinate system. Note that the rotation matrix R is a 3×3 matrix.
  • Next, the three-dimensional coordinate point (Xc, Yc, Zc) of the camera coordinate system is transformed into a two-dimensional coordinate point (u, v) on a camera image by perspective projection transformation by
  • (u, v) = (Xc / Zc, Yc / Zc)   (2)
  • As described above, in order to obtain a coordinate point on a camera image from a coordinate point of the world coordinate system, nine multiplications are required in equation (1) and two divisions are required in equation (2). In this manner, multiplication and division are required for each of the x-, y-, and z-coordinates in order to transform a three-dimensional coordinate point into a coordinate point on a camera image, thereby increasing the cost.
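For reference, the baseline transformation of equations (1) and (2) can be sketched as follows, assuming R and t have already been derived from the camera parameters; the operation count (nine multiplications, then two divisions) matches the text above.

```python
import numpy as np

def world_to_image(p_world, R, t):
    """Equations (1) and (2): world -> camera -> image coordinates.

    p_world: 3-vector (Xw, Yw, Zw); R: 3x3 rotation matrix; t: 3-vector."""
    xc, yc, zc = R @ np.asarray(p_world) + t  # equation (1): 9 multiplications
    return xc / zc, yc / zc                   # equation (2): 2 divisions
```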
  • SUMMARY OF THE INVENTION
  • According to one aspect of the present invention, there is provided an information processing apparatus that generates a three-dimensional model based on a plurality of camera images obtained using a plurality of cameras, comprising: a transformation unit configured to transform, into two-dimensional coordinate points on a camera image, a plurality of representative coordinate points specified from one processing unit voxel of a plurality of processing unit voxels that are obtained by dividing a target three-dimensional space serving as a target of three-dimensional model generation; a determination unit configured to determine, by using transformation results of the plurality of representative coordinate points by the transformation unit, a coordinate point on the camera image corresponding to an internal coordinate point of the one processing unit voxel; and a generation unit configured to generate the three-dimensional model based on the coordinate point on the camera image corresponding to the internal coordinate point of the one processing unit voxel determined by the determination unit.
  • According to another aspect of the present invention, there is provided a generation method of generating a three-dimensional model based on a plurality of camera images obtained using a plurality of cameras, the method comprising: transforming, into two-dimensional coordinate points on a camera image, a plurality of representative coordinate points specified from one processing unit voxel of a plurality of processing voxels that are obtained by dividing a target three-dimensional space serving as a target of three-dimensional model generation; determining, by using transformation results of the plurality of representative coordinate points, a coordinate point on the camera image corresponding to an internal coordinate point of the one processing unit voxel; and generating the three-dimensional model based on the coordinate point on the camera image corresponding to the internal coordinate point of the one processing unit voxel that has been determined.
  • Further features of the present invention will become apparent from the following description of exemplary embodiments with reference to the attached drawings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1A is a block diagram showing an example of the functional arrangement of a three-dimensional model generation apparatus according to the first embodiment;
  • FIG. 1B is a block diagram showing an example of the hardware arrangement of the three-dimensional model generation apparatus;
  • FIG. 2 is a view showing an example of a system for obtaining input data;
  • FIG. 3 is a view showing an example of a silhouette image;
  • FIG. 4 is a schematic view expressing a target three-dimensional space, processing unit voxels, and model voxels;
  • FIG. 5 is a view exemplifying the results of transforming representative coordinate points of a processing unit voxel into coordinate points on a camera image;
  • FIGS. 6A to 6C are views each showing the transformation of the vertices of a voxel into pixel coordinate points on a silhouette image and a search area;
  • FIG. 7 is a flowchart for expressing voxel determination processing;
  • FIGS. 8A and 8B are flowcharts for expressing three-dimensional model generation processing;
  • FIG. 9 is a block diagram showing an example of the functional arrangement of a three-dimensional model generation apparatus according to the second embodiment;
  • FIG. 10 is a view exemplifying the results of transforming representative coordinate points of a processing unit voxel into coordinate points on a camera image; and
  • FIGS. 11A and 11B are views each showing coordinate points in the camera image and distances between the coordinate points that are used for approximate calculation.
  • DESCRIPTION OF THE EMBODIMENTS
  • Embodiments of the present invention will be described hereinafter with reference to the accompanying drawings.
  • First Embodiment
  • A three-dimensional model generation apparatus according to the first embodiment generates, by the volume intersection method, a three-dimensional model based on a silhouette image of a target object to be modeled for each camera. In the first embodiment, when transforming the three-dimensional coordinate points of each voxel into corresponding pixel coordinate points on a silhouette image of each camera, the calculation load is reduced by performing projective transformation on each representative coordinate point and obtaining each remaining arbitrary coordinate point by approximate calculation. Note that in this embodiment, a voxel indicates a unit of division of a target three-dimensional space that is to be shot by a plurality of cameras. Each voxel may be a cube, a cuboid, or another shape.
  • FIG. 1A is a block diagram showing an example of the functional arrangement of a three-dimensional model generation apparatus 10 according to the first embodiment. FIG. 1B is a block diagram showing an example of the hardware arrangement of an information processing apparatus serving as the three-dimensional model generation apparatus 10. FIG. 2 is a view for explaining a shooting system for obtaining input data of the three-dimensional model generation apparatus 10 according to the first embodiment.
  • As shown in FIG. 2, a plurality of cameras 11 to 18 capture images of a space including the target space for three-dimensional model generation. The image data obtained by processing the captured camera images and the information of each camera are used as input data of the three-dimensional model generation apparatus 10. Note that in FIG. 2, although eight cameras are shown as the plurality of cameras, the number of cameras is not limited to this. Details of the input data will be described later.
  • As shown in FIG. 1B, the three-dimensional model generation apparatus 10 according to this embodiment is formed by an information processing apparatus. A general-purpose computer can be used as the information processing apparatus. A CPU 21 controls the overall information processing apparatus by executing a program stored in a ROM 22 or a RAM 23. For example, the CPU 21 implements each function as the three-dimensional model generation apparatus 10 (to be described later) by loading a predetermined program stored in a storage device 25 to the RAM 23 and executing the program loaded to the RAM 23. The ROM 22 is a read only non-volatile memory. The RAM 23 is a memory that is readable and writable as needed. A display device 24 performs various kinds of display under the control of the CPU 21.
  • The storage device 25 is a large-capacity storage device formed from, for example, a hard disk or the like. The storage device 25 can store the camera images obtained by the cameras 11 to 18. An input device 26 accepts various kinds of user inputs to the information processing apparatus. An interface 27 is an interface for connecting to an external apparatus, and the plurality of cameras are connected to this interface in this embodiment. Note that the cameras 11 to 18 may be connected via a network such as a LAN. In this case, the interface 27 functions as a network interface. A bus 28 communicably connects the above-described components to each other. Note that in this embodiment, although each function of the three-dimensional model generation apparatus 10 to be described below is implemented by the CPU 21 executing a predetermined program, it is not limited to this. For example, each function of the three-dimensional model generation apparatus 10 may be implemented by cooperation between software and hardware such as a dedicated IC. Alternatively, some or all of the functions may be implemented by hardware only.
  • FIG. 1A shows an example of the functional arrangement of the three-dimensional model generation apparatus 10 that generates a three-dimensional model based on a plurality of camera images obtained by using the plurality of cameras. In FIG. 1A, a target space setting unit 100 sets a three-dimensional space (target three-dimensional space) that is to be the target of three-dimensional model generation. For example, in FIG. 2, a cuboid ranging from −2000 mm to 2000 mm along the x-axis, from −2000 mm to 2000 mm along the y-axis, and from 0 mm to 3000 mm along the z-axis has been set as the target three-dimensional space. However, the size of the target three-dimensional space is not limited to this. The coordinate system expressing the coordinates of the set three-dimensional space is called a world coordinate system. The coordinate range information of the set three-dimensional space is output to a processing unit voxel division unit 103. Note that a user determines a target three-dimensional space by designating a range that can be captured by a plurality of cameras in a system for obtaining input data. However, it may be set so that the target three-dimensional space is determined automatically.
  • A camera parameter input unit 101 inputs parameters from each of the cameras 11 to 18, which shot the target three-dimensional space, and outputs the camera parameters to a representative coordinate transformation unit 105. Here, camera parameters are a set of parameters including the three-dimensional position of a camera in the world coordinate system, an optical-axis direction vector, a camera screen lower direction vector, a focal length, and the image center. However, all of the aforementioned parameters need not always be included in each set of camera parameters. For example, only the three-dimensional position of the camera and the optical-axis direction vector (or the orientation information of the camera) may be obtained as the camera parameters.
  • A silhouette image input unit 102 inputs the silhouette image of each camera image. A silhouette image is an image identifying an object region and a region other than the object region by expressing the object region to be the target of model generation by using white pixels and expressing a background region other than the object region by using black pixels. However, the arrangement of the white pixels and the black pixels of the object region and the background region, respectively, is not limited to this and may be reversed. The silhouette image exemplified in FIG. 3 has been obtained by processing a camera image in which three people are present into a silhouette image. As a method of generating a silhouette image from a camera image, there is, for example, a background subtraction method in which the object region and the background region are identified by comparing an image shot when an object is not present and an image shot when the object is present. The method of generating a silhouette image is not limited to this, as a matter of course. A method of detecting an object region by obtaining a correlation using a plurality of images captured at different times, an edge extraction method, or the like can be used.
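  • As a concrete illustration of the background subtraction method mentioned above, the following is a minimal sketch assuming grayscale images held as numpy arrays; the function name and the threshold value are illustrative choices, not part of the embodiment.

```python
import numpy as np

def make_silhouette(background: np.ndarray, frame: np.ndarray,
                    threshold: int = 30) -> np.ndarray:
    """Return a silhouette image: 255 (white) for object pixels,
    0 (black) for background pixels."""
    # Per-pixel absolute difference between the image shot without the
    # object and the image shot with the object present.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return np.where(diff > threshold, 255, 0).astype(np.uint8)
```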
  • The processing unit voxel division unit 103 divides the space set by the target space setting unit 100 into a plurality of processing unit voxels. The information of each processing unit voxel is output to a representative coordinate determination unit 104 and a model voxel division unit 106. Here, the information of each processing unit voxel includes the smallest x-, y-, and z-vertex coordinates and the length of each side. However, in this embodiment, assume that the length of each side is set to the same value in all of the processing unit voxels and that each side of a processing unit voxel is arranged to be parallel to one of the x-, y-, and z-axes. The size of each processing unit voxel may be a preset size or may be determined in accordance with the size of the target three-dimensional space or the voxel size of each voxel which is to form the three-dimensional model to be ultimately output. Each voxel that forms this three-dimensional model which is to be ultimately output is referred to as a model voxel. In this embodiment, assume that the processing unit voxel is determined in accordance with the size of each model voxel which is to be output.
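  • The division described above can be sketched as follows, assuming the processing unit voxel information (smallest vertex and side length) just described; the function name is illustrative.

```python
from itertools import product

def divide_into_processing_voxels(x_range, y_range, z_range, side):
    """Divide the target three-dimensional space into processing unit
    voxels, each described by its smallest (x, y, z) vertex and its
    side length (all sides share the same value)."""
    xs = range(x_range[0], x_range[1], side)
    ys = range(y_range[0], y_range[1], side)
    zs = range(z_range[0], z_range[1], side)
    return [((x, y, z), side) for x, y, z in product(xs, ys, zs)]

# The 4000 mm x 4000 mm x 3000 mm space of FIG. 2 with 1000 mm voxels
# yields the 4 x 4 x 3 division described for FIG. 4.
voxels = divide_into_processing_voxels((-2000, 2000), (-2000, 2000),
                                       (0, 3000), 1000)
assert len(voxels) == 4 * 4 * 3
```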
  • FIG. 4 is a schematic view expressing the target three-dimensional space set by the target space setting unit 100, the processing unit voxels, and the model voxels. In FIG. 4, the target three-dimensional space is divided into a processing unit voxel 201 and the like. Here, each processing unit voxel is a voxel in which the length of each side is 1000 mm, and the set target three-dimensional space is divided into quarters in the x-axis and y-axis directions and divided into thirds in the z-axis direction. Note that, however, the division count of the processing unit voxels is not limited to this.
  • The representative coordinate determination unit 104 specifies, in each processing unit voxel, a plurality of representative coordinate points to be used for transforming the three-dimensional coordinate points of the target space into coordinate points on a camera image. As will be described below, an approximate calculation unit 107 calculates transformation coefficients for transforming movement amounts in the respective axis directions of the target three-dimensional space into movement amounts in the camera image. Hence, representative coordinate points are obtained so as to include a pair of coordinate points in which only the x-coordinates differ, a pair of coordinate points in which only the y-coordinates differ, and a pair of coordinate points in which only the z-coordinates differ. In this embodiment, the following four points of a processing unit voxel are selected as the representative coordinate points.
      • (1) A point with the smallest x-, y-, and z-values.
      • (2) A point with the smallest y- and z-values and the largest x-value.
      • (3) A point with the smallest x- and z-values and the largest y-value.
      • (4) A point with the smallest x- and y-values and the largest z-value.
  • Although this embodiment has described an example in which four points are selected among the vertices of the processing unit voxel, other points can be selected as well. For example, it may be set so that the four points are selected by using the center of gravity of the processing unit voxel instead of the vertices of the processing unit voxel, but it is not limited to this. The determined representative coordinate points are output to the representative coordinate transformation unit 105.
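  • A minimal sketch of the selection of the four representative coordinate points (1) to (4) above, assuming a cubic processing unit voxel given by its smallest vertex and side length; the function name is illustrative.

```python
def representative_points(origin, side):
    """Return the four representative coordinate points of a processing
    unit voxel: the minimum vertex plus the vertices reached by moving
    one side length along the x-, y-, and z-axes, respectively."""
    x, y, z = origin
    return [
        (x, y, z),         # (1) smallest x-, y-, and z-values
        (x + side, y, z),  # (2) largest x-value only
        (x, y + side, z),  # (3) largest y-value only
        (x, y, z + side),  # (4) largest z-value only
    ]
```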
  • The representative coordinate transformation unit 105 transforms, based on the camera parameters, the input representative coordinate points (coordinate points in the target three-dimensional space) of the processing unit voxel into coordinate points (two-dimensional coordinate points) of a camera image shot by each camera. Note that a coordinate system expressing the target three-dimensional space will be represented as a world coordinate system hereinafter. In FIG. 2, each coordinate point in the world coordinate system of the target three-dimensional space is represented by coordinate values ranging from (−2000, −2000, 0) to (2000, 2000, 3000). Assume that the aforementioned camera parameters indicate the position and the orientation of each camera in the world coordinate system. The representative coordinate transformation unit 105 calculates, by matrix calculation and perspective projection transformation, the coordinate points on each camera image that correspond to the plurality of representative coordinate points in the target three-dimensional space.
  • First, the representative coordinate transformation unit 105 transforms coordinate values (Xw, Yw, Zw) of the world coordinate system into coordinate values (Xc, Yc, Zc) of the camera coordinate system by using a rotation matrix R and a translation vector t derived from the camera parameters. The above-described equation (1) can be used for this transformation. Next, the representative coordinate transformation unit 105 transforms the three-dimensional coordinate point (Xc, Yc, Zc) of the camera coordinate system into a two-dimensional coordinate point (u, v) of a camera image by perspective projection transformation. The above-described equation (2) can be used for this transformation. The representative coordinate transformation unit 105 outputs each representative coordinate point and the information of the corresponding transformed coordinate point in the camera image to the approximate calculation unit 107.
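  • A minimal sketch of this two-step transformation, assuming equation (1) has the common form (Xc, Yc, Zc)ᵀ = R(Xw, Yw, Zw)ᵀ + t; the function name is illustrative.

```python
import numpy as np

def world_to_image(p_world, R, t):
    """Transform a world coordinate point into a camera image coordinate
    point: equation (1) (rotation and translation into the camera
    coordinate system), then equation (2) (perspective projection)."""
    p_cam = R @ np.asarray(p_world, dtype=float) + t  # equation (1)
    u = p_cam[0] / p_cam[2]                           # equation (2)
    v = p_cam[1] / p_cam[2]
    return u, v
```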
  • The model voxel division unit 106 divides each processing unit voxel into model voxels of a three-dimensional model that has been set in advance. The model voxels are also arranged to be parallel to the x-, y-, and z-axes in the same manner as the processing unit voxels. Each processing unit voxel is filled with the model voxels without any gaps between the model voxels; that is, the processing unit voxel is divided into model voxels completely. The model voxel division unit 106 outputs the voxel group information after the division to the approximate calculation unit 107. The voxel group information includes the smallest x-, y-, and z-vertex coordinates and the length of each side of each model voxel.
  • FIG. 4 shows how the processing unit voxel 201, which is a 1000 mm cube, is divided into model voxels, each of which is a 100 mm cube. Hence, one processing unit voxel is divided into a total of 1000 model voxels. Note that although not all of the model voxels are drawn in FIG. 4 for the sake of descriptive convenience, the entire region of each processing unit voxel is divided into model voxels in practice.
  • Based on how the positional relationships of the plurality of representative coordinate points change from the target three-dimensional space to a camera image, the approximate calculation unit 107 performs approximate calculation to determine the coordinate points in the camera image that correspond to the internal coordinate points of one processing unit voxel. The approximate calculation according to this embodiment will be described hereinafter.
  • The approximate calculation unit 107 calculates, based on the representative coordinate points and the information of the corresponding coordinate points in a camera image that have been received from the representative coordinate transformation unit 105, the transformation coefficients between the movement amounts of the three-dimensional coordinate points in the world coordinate system and the pixel movement amounts in a camera image. The transformation coefficients are calculated for every camera that shoots the target space. FIG. 5 is a view exemplifying the results of transforming the representative coordinate points of a processing unit voxel into coordinate points on a camera image. A method of calculating the transformation coefficient between each of the movement amounts of three-dimensional coordinate points in the world coordinate system and each of the pixel movement amounts in a camera image will be described with reference to FIG. 5 hereinafter. In this embodiment, the approximate calculation unit 107 calculates, from the transformation results of the plurality of representative coordinate points, the transformation coefficients of the movement amounts of the u- and v-coordinates in a camera image for each of the movement amounts of the x-, y-, and z-coordinates in the target three-dimensional space. The approximate calculation unit 107 uses the calculated transformation coefficients to calculate (to approximate) the coordinate point in the camera image that corresponds to an arbitrary coordinate point in the target three-dimensional space. This operation will be described in detail below.
  • In FIG. 5, assume that the result obtained from transforming a representative coordinate point 300 of a processing unit voxel is a transformed coordinate point 400 on the camera image. Hereinafter, assume that a representative coordinate point 301 corresponds to a transformed coordinate point 401, a representative coordinate point 302 corresponds to a transformed coordinate point 402, and a representative coordinate point 303 corresponds to a transformed coordinate point 403. In the case of the representative coordinate point 300 (X1, Y1, Z1) and the coordinate values (X2, Y1, Z1) of the representative coordinate point 301, only the values of the x-axis differ from each other, and the values of the y-axis and the z-axis are the same. The difference between the coordinate values of the transformed coordinate point 401 (ux, vx) and those of the transformed coordinate point 400 (ub, vb) is a difference generated from movement in the x-axis direction in the three-dimensional space. Hence, the relationship between the movement amount in the x-axis direction in the three-dimensional space and the movement amount on the camera image can be calculated. Pixel movement amounts dux and dvx on the camera image for each 1 mm of the movement amount on the x-axis in the three-dimensional space are calculated by
  • dux = (ux − ub) / (X2 − X1)  (3)
  • dvx = (vx − vb) / (X2 − X1)  (4)
  • In the same manner, pixel movement amounts duy and dvy on the camera image for each 1 mm of the movement amount on the y-axis in the three-dimensional space are calculated from the coordinate values of the representative coordinate point 300 and the representative coordinate point 302 and the coordinate values of the transformed coordinate point 400 and the transformed coordinate point 402 by
  • duy = (uy − ub) / (Y2 − Y1)  (5)
  • dvy = (vy − vb) / (Y2 − Y1)  (6)
  • In the same manner, pixel movement amounts duz and dvz on the camera image for each 1 mm of the movement amount on the z-axis in the three-dimensional space are calculated from the coordinate values of the representative coordinate point 300 and the representative coordinate point 303 and the coordinate values of the transformed coordinate point 400 and the transformed coordinate point 403 by
  • duz = (uz − ub) / (Z2 − Z1)  (7)
  • dvz = (vz − vb) / (Z2 − Z1)  (8)
  • These calculation results are used to transform an arbitrary coordinate point 304 (X, Y, Z) of the target three-dimensional space into a transformed coordinate point 404 (u, v), which is a coordinate point on the camera image, by

  • u = ub + (dux × (X − X1)) + (duy × (Y − Y1)) + (duz × (Z − Z1))  (9)
  • v = vb + (dvx × (X − X1)) + (dvy × (Y − Y1)) + (dvz × (Z − Z1))  (10)
  • where (X, Y, Z) is a coordinate point satisfying X1 ≤ X ≤ X2, Y1 ≤ Y ≤ Y2, and Z1 ≤ Z ≤ Z2.
  • According to this method, although the cost of calculating the pixel movement amounts is added in comparison to the conventional art, each coordinate point can be transformed by three multiplications. By using the results obtained from the calculations on the representative coordinate points of the processing unit voxel for the coordinate transformation of the plurality of model voxels present in that processing unit voxel, a cost of six multiplications and two divisions can be removed from the calculation of each coordinate point, resulting in a large cost reduction overall. The approximate calculation unit 107 performs the above-described calculation and transforms the coordinate values of the eight vertices of each model voxel, serving as positions related to the model voxel, into coordinate points in each camera image. Subsequently, the approximate calculation unit outputs the coordinate values in each camera image to an inside/outside determination unit 108. Note that the method of transforming the coordinate points in a voxel is not limited to the above-described approximate calculation, and various kinds of modifications are applicable. That is, any transformation method can be used that derives the coordinate points in the camera image from the coordinate points in a voxel of the target three-dimensional space based on the positional relationships of the representative coordinate points of the voxel in the target three-dimensional space and the positional relationships of those points in the camera image.
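  • The approximate calculation of equations (3) to (10) can be sketched as follows; the function names and the dictionary layout are illustrative.

```python
def transformation_coefficients(reps, reps_img):
    """Equations (3) to (8): per-1 mm pixel movement amounts on one
    camera image.

    reps     -- representative points (X1,Y1,Z1), (X2,Y1,Z1),
                (X1,Y2,Z1), (X1,Y1,Z2)
    reps_img -- their transformed points (ub,vb), (ux,vx), (uy,vy), (uz,vz)
    """
    (X1, Y1, Z1), (X2, _, _), (_, Y2, _), (_, _, Z2) = reps
    (ub, vb), (ux, vx), (uy, vy), (uz, vz) = reps_img
    return {
        "base": (X1, Y1, Z1, ub, vb),
        "x": ((ux - ub) / (X2 - X1), (vx - vb) / (X2 - X1)),  # (3), (4)
        "y": ((uy - ub) / (Y2 - Y1), (vy - vb) / (Y2 - Y1)),  # (5), (6)
        "z": ((uz - ub) / (Z2 - Z1), (vz - vb) / (Z2 - Z1)),  # (7), (8)
    }

def approximate_image_point(coeffs, point):
    """Equations (9) and (10): approximate the camera image coordinates
    of an arbitrary point (X, Y, Z) inside the processing unit voxel."""
    X1, Y1, Z1, ub, vb = coeffs["base"]
    (dux, dvx), (duy, dvy), (duz, dvz) = coeffs["x"], coeffs["y"], coeffs["z"]
    X, Y, Z = point
    u = ub + dux * (X - X1) + duy * (Y - Y1) + duz * (Z - Z1)  # (9)
    v = vb + dvx * (X - X1) + dvy * (Y - Y1) + dvz * (Z - Z1)  # (10)
    return u, v
```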
  • The inside/outside determination unit 108 determines whether a model voxel is within the silhouette of the model generation target object in each camera image. FIGS. 6A to 6C are views exemplifying the results of transforming the eight vertices of a model voxel onto silhouette images. The inside/outside determination unit 108 compares the transformed coordinate points and extracts the largest u- and v-coordinates and the smallest u- and v-coordinates. The rectangular region bounded by the largest and smallest u- and v-coordinates is set as the search region of the silhouette image. In FIG. 6A, the inside/outside determination unit 108 sets a search region 500 from the coordinate points related to the model voxel. The inside/outside determination unit 108 searches whether a white pixel (silhouette pixel) is present in this search region 500. If a white pixel is present, the inside/outside determination unit determines that the model voxel is inside the silhouette. If a white pixel is not present, the model voxel is determined to be outside the silhouette.
  • Note that if some of the transformed coordinate points are outside the silhouette image, as shown in FIG. 6B, the inside/outside determination unit 108 determines a search region 501 by clipping the region at the largest coordinate values of the silhouette image. Also, if all of the transformed coordinate points are outside the silhouette image, as shown in FIG. 6C, the inside/outside determination unit 108 excludes the model voxel from the determination targets for that camera since the coordinate points are outside the image capturing range of the camera. Subsequently, the inside/outside determination unit 108 counts, for that model voxel, the cameras that are determined to be inside the silhouette and the cameras that are determined to be outside the silhouette. The number of cameras inside the silhouette and the number of cameras outside the silhouette are output, as the determination result, to a three-dimensional model determination unit 109.
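  • A minimal sketch of the search-region test of FIGS. 6A to 6C, assuming the silhouette image is a numpy array with white pixels stored as 255; names are illustrative.

```python
import numpy as np

def voxel_inside_silhouette(vertices_uv, silhouette):
    """Return True/False for inside/outside the silhouette, or None when
    the search region lies entirely outside the image (FIG. 6C)."""
    h, w = silhouette.shape
    us = [u for u, v in vertices_uv]
    vs = [v for u, v in vertices_uv]
    # Bounding box of the eight transformed vertices, clipped at the
    # image boundary as in FIG. 6B.
    u0, u1 = max(int(min(us)), 0), min(int(max(us)), w - 1)
    v0, v1 = max(int(min(vs)), 0), min(int(max(vs)), h - 1)
    if u0 > u1 or v0 > v1:
        return None  # outside the image capturing range of the camera
    # Inside if any white (silhouette) pixel exists in the search region.
    return bool(np.any(silhouette[v0:v1 + 1, u0:u1 + 1] == 255))
```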
  • Note that although a method of performing inside/outside silhouette determination by using the eight vertices of each model voxel in the approximate calculation unit 107 and the inside/outside determination unit 108 has been described, the method is not limited to this. For example, the inside/outside silhouette determination may be performed by using only one model voxel coordinate point, transforming that one coordinate point into a camera image, and determining whether the coordinate point of the camera image is a white pixel. In this case, one predetermined coordinate point among the vertices of the model voxel may be used, or the center of gravity of the model voxel may be used as the one predetermined coordinate point.
  • The three-dimensional model determination unit 109 determines, based on the result of inside/outside silhouette determination by the inside/outside determination unit 108, whether the target model voxel is a voxel forming the model. The three-dimensional model determination unit outputs, to a three-dimensional model generation unit 110, the coordinate values of the model voxel which has been determined to be a voxel forming the model.
  • Model voxel determination processing performed by the three-dimensional model determination unit 109 according to the first embodiment will be described below in accordance with the flowchart of FIG. 7.
  • First, the three-dimensional model determination unit 109 determines whether the total number of cameras that are determined to be outside the silhouette is equal to or more than a first threshold (step S700). If the number of cameras outside the silhouette is equal to or more than the first threshold (YES in step S700), the three-dimensional model determination unit 109 determines that the model voxel is not a voxel forming the three-dimensional model (step S701). On the other hand, if the number of cameras outside the silhouette is less than the first threshold (NO in step S700), the three-dimensional model determination unit 109 determines whether the number of cameras that are determined to be inside the silhouette is equal to or more than a second threshold (step S702). If the number of cameras determined to be inside the silhouette is less than the second threshold (NO in step S702), the three-dimensional model determination unit 109 determines that the model voxel is not a voxel forming the three-dimensional model (step S701). If the number of cameras determined to be inside the silhouette is equal to or more than the second threshold (YES in step S702), the three-dimensional model determination unit 109 determines that the model voxel is a voxel forming the three-dimensional model (step S703).
  • In this embodiment, one is set as the first threshold for the number of cameras determined to be outside the silhouette in step S700, and two is set as the second threshold for the number of cameras determined to be inside the silhouette in step S702. However, the first and second thresholds are not limited to those described above. For example, the thresholds may be set in accordance with the shooting environment and the number of cameras. Also, in the determination as to whether a model voxel is a voxel forming the three-dimensional model, it may be set so that only the number of cameras determined to be inside the silhouette is used, or so that only the number of cameras determined to be outside the silhouette is used.
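  • The determination of FIG. 7 can be sketched as follows, with the threshold values of this embodiment as defaults; the function name is illustrative.

```python
def forms_model(n_outside: int, n_inside: int,
                first_threshold: int = 1, second_threshold: int = 2) -> bool:
    """Decide whether a model voxel forms the three-dimensional model."""
    if n_outside >= first_threshold:      # step S700
        return False                      # step S701
    return n_inside >= second_threshold   # steps S702 and S703
```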
  • The three-dimensional model generation unit 110 outputs the coordinate values of the voxel that has been determined by the three-dimensional model determination unit 109 to be a model voxel forming the model. In this embodiment, assume that an aggregate of points is used as the output format and that the coordinate values of each point indicate the center-of-gravity coordinate point of a model voxel. Also, assume that the coordinate representation expresses the numerical values of the x-, y-, and z-coordinates in mm. Note that although the center of gravity of a model voxel is used here as the coordinate values of the model voxel to be output, it is not limited to this. For example, one of the vertices of the model voxel may be used. Also, the unit of coordinate representation is not limited to mm. Furthermore, although an aggregate of points has been assumed as the output format here, a format such as a mesh model or a polygon may also be used.
  • The three-dimensional model generation processing performed by the three-dimensional model generation apparatus 10 will be described below in accordance with the flowcharts of FIGS. 8A and 8B.
  • First, the target space setting unit 100 sets the target three-dimensional space for model generation (step S800). The processing unit voxel division unit 103 divides the set target three-dimensional space into processing unit voxels and inputs the processing unit voxels to the representative coordinate determination unit 104 and the model voxel division unit 106 (step S801). The processes of steps S802 to S808 are executed for each processing unit voxel hereinafter.
  • The representative coordinate determination unit 104 determines the representative coordinate points of each processing unit voxel and outputs the determined representative coordinate points to the representative coordinate transformation unit 105 (step S802). The representative coordinate transformation unit 105 transforms, by calculating equations (1) and (2), the representative coordinate points into the coordinate points of each camera image and outputs the representative coordinate points and the information of the transformed coordinate points in each camera image to the approximate calculation unit 107 (step S803). The approximate calculation unit 107 calculates, by using equations (3) to (8), the transformation coefficients between the movement amounts of the three-dimensional coordinate points in the world coordinate system and the pixel movement amounts in each camera image from the representative coordinate points and the information of the transformed coordinate points in each camera image (step S804). The model voxel division unit 106 divides the processing unit voxel into model voxels and outputs the information of the divided model voxels to the approximate calculation unit 107 (step S805).
  • The processes of steps S806 to S808 are executed for each model voxel, for all of the model voxels. Of these, the processes of steps S806 and S807 are performed for each camera, for all of the cameras. The approximate calculation unit 107 calculates each transformed coordinate point of the model voxel on a camera image by the approximate calculation of equations (9) and (10) (step S806). The inside/outside determination unit 108 performs, for each camera, inside/outside silhouette determination by searching the search region determined from the transformed coordinate points on the camera image. The inside/outside determination unit outputs the number of cameras that are determined to be inside the silhouette and the number of cameras that are determined to be outside the silhouette to the three-dimensional model determination unit 109 (step S807). The three-dimensional model determination unit 109 determines, based on the result of the inside/outside silhouette determination of the model voxel, whether the model voxel is a model voxel that forms the three-dimensional model (step S808). The three-dimensional model generation unit 110 outputs the model voxels that have been determined to form the model, as an aggregate of points, as the three-dimensional model (step S809).
  • As described above, according to the first embodiment, when the three-dimensional coordinate points of each model voxel are to be transformed into pixel coordinate points on the silhouette image of each camera, only the representative coordinate points of a processing unit voxel undergo projective transformation, and the remaining coordinate points are obtained by approximate calculation. As a result, the calculation load when constructing a three-dimensional model by the volume intersection method can be greatly reduced.
  • Second Embodiment
  • When the approximate calculation method described in the first embodiment is used, the calculation error becomes large compared to that of the matrix calculation shown by equations (1) and (2). The influence of this calculation error on the accuracy of the three-dimensional model to be generated can increase depending on the size of each processing unit voxel and the size of each model voxel. The larger the processing unit voxel is with respect to the model voxel, the larger the calculation error, and therefore the larger its influence on the accuracy of the three-dimensional model. On the other hand, the smaller the processing unit voxel is with respect to the model voxel, the fewer times the transformation coefficients calculated from the transformation results of the representative coordinate points can be reused in the approximate calculation, which reduces the speedup. In some cases, the load of calculating the transformation coefficients may even make the processing slower than when normal matrix calculation is performed.
  • In the second embodiment, whether to execute coordinate transformation by approximate calculation or by matrix calculation is determined, in consideration of the influence from a calculation error and the effect of the increased processing speed, in accordance with the size of each processing unit voxel and each model voxel which is ultimately output. That is, according to the second embodiment, whether to perform coordinate transformation by approximate calculation or matrix calculation is suitably switched in accordance with the size of a voxel to be processed.
  • FIG. 9 is a block diagram showing an example of the functional arrangement of a three-dimensional model generation apparatus 10 according to the second embodiment. Blocks having the same functions as those of the first embodiment (FIG. 1A) are denoted by the same reference numerals.
  • In FIG. 9, a method selection unit 111 obtains the information of a processing unit voxel from the processing unit voxel division unit 103 and the information of a model voxel from the model voxel division unit 106. The method selection unit 111 selects, in accordance with the ratio between the size of the processing unit voxel and the size of the model voxel, whether to execute coordinate transformation by approximate calculation or by matrix calculation, which yields an accurate transformation result. The details of the selection method will be described later.
  • If the method selection unit 111 selects to execute approximate calculation, the information of the processing unit voxel is output to a representative coordinate determination unit 104 and the information of the model voxel is output to the approximate calculation unit 107. On the other hand, if the method selection unit 111 selects to execute matrix calculation, the information of the model voxel is output to a matrix calculation unit 112.
  • The matrix calculation unit 112 obtains the camera parameters of each camera from a camera parameter input unit 101, obtains the information of the model voxel from the method selection unit 111, and transforms, by calculating equations (1) and (2), the coordinate points of the respective eight vertices of the model voxel into coordinate points on each camera image. That is, the matrix calculation unit 112 executes matrix calculation of transforming the coordinate points of a target three-dimensional space into coordinate points on a camera image by matrix transformation and perspective projection transformation. The matrix calculation unit 112 outputs the transformed coordinate values to an inside/outside determination unit 108. After the coordinate transformation, in the same manner as in the first embodiment, inside/outside silhouette determination by the inside/outside determination unit 108 and model voxel determination by a three-dimensional model determination unit 109 are performed, and the three-dimensional model is generated by a three-dimensional model generation unit 110.
  • Next, the selection of a coordinate transformation method (approximate calculation or matrix calculation) by the method selection unit 111 will be described. The method selection unit 111 according to this embodiment selects one of the approximate calculation and the matrix calculation based on the relationship between the size of the processing unit voxel and the size of the model voxel. More specifically, if the ratio between the size of the processing unit voxel and the size of the model voxel is between a preset lower threshold and a preset upper threshold, the approximate calculation is selected. If the ratio is larger than the upper threshold or smaller than the lower threshold, the method selection unit 111 selects the matrix calculation.
  • As described above, if the difference between the sizes of the voxels is large, the calculation error will be large. Hence, the matrix calculation is selected in a case in which the aforementioned ratio exceeds the upper threshold. If the difference between the sizes of the voxels is small, the effect of the increased processing speed by the approximate calculation will be small, and the calculation of the transformation coefficients may, on the contrary, increase the load. Hence, the matrix calculation is also selected in a case in which the ratio is lower than the lower threshold.
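  • A minimal sketch of this selection rule; the threshold values here are placeholders, since the embodiment does not specify concrete values.

```python
def select_method(unit_voxel_side: float, model_voxel_side: float,
                  lower: float = 2.0, upper: float = 50.0) -> str:
    """Select the coordinate transformation method from the ratio between
    the processing unit voxel size and the model voxel size."""
    ratio = unit_voxel_side / model_voxel_side
    if ratio > upper or ratio < lower:
        return "matrix"       # error too large, or speedup too small
    return "approximate"
```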
  • Note that although the second embodiment has shown a method in which an upper threshold and a lower threshold are used, it may be arranged so that only one of the upper threshold and the lower threshold is set, and the calculation method may be selected based on whether the ratio exceeds this single threshold. Also, the calculation method may be selected based on whether the size of the processing unit voxel or the size of the model voxel is larger than a predetermined size. For example, it can be set so that, with respect to the size of an object forming the target three-dimensional model, the approximate calculation is used in a case in which the processing unit voxel is equal to or smaller than a predetermined size, for example, ¼ of the size of the object, and so that the matrix calculation is used in other cases. However, the ratio in this case is not limited to that described above.
  • As described above, according to the second embodiment, in a case in which the calculation error by approximate calculation is small or in a case in which the effect of increased processing speed is large, coordinate transformation can be performed by approximate calculation. That is, the approximate calculation and the matrix calculation can be used suitably, and it is possible to generate a three-dimensional model with high accuracy and high speed.
  • Third Embodiment
  • In the first embodiment, the transformation coefficients are determined based on the assumption that a movement amount in each axis direction in the world coordinate system is uniformly reflected in a movement amount in the camera image. In the third embodiment, a higher-accuracy approximate calculation is performed by considering the deformation of an object region when the target three-dimensional space is projected onto a camera image. More specifically, the change in the length of a line segment in the camera image when a predetermined line segment is translated between representative coordinate points in the target three-dimensional space is added to each transformation coefficient of the first embodiment. Note that the arrangement of a three-dimensional model generation apparatus 10 according to the third embodiment is the same as that of the first embodiment (FIG. 1A). However, the operations of a representative coordinate determination unit 104 and an approximate calculation unit 107 are different from those of the first embodiment.
  • FIG. 10 is a view exemplifying the results from transforming representative coordinate points of a processing unit voxel into coordinate points on a camera image according to the third embodiment. In the first embodiment, the representative coordinate determination unit 104 determined four points, among the vertices of the processing unit voxel, as the representative coordinate points. In the third embodiment, the representative coordinate determination unit 104 determines, among the vertices of the processing unit voxel, the following six points as the representative coordinate points.
  • (1) A point with the smallest x-, y-, and z-values (a representative coordinate point 600 and a transformed coordinate point 700 in FIG. 10).
  • (2) A point with the smallest y- and z-values and the largest x-value (a representative coordinate point 601 and a transformed coordinate point 701 in FIG. 10).
  • (3) A point with the smallest x- and z-values and the largest y-value (a representative coordinate point 602 and a transformed coordinate point 702 in FIG. 10).
  • (4) A point with the smallest x- and y-values and the largest z-value (a representative coordinate point 603 and a transformed coordinate point 703 in FIG. 10).
  • (5) A point with the smallest y-value and the largest x- and z-values (a representative coordinate point 604 and a transformed coordinate point 704 in FIG. 10).
  • (6) A point with the smallest z-value and the largest x- and y-values (a representative coordinate point 605 and a transformed coordinate point 705 in FIG. 10).
  • Here, although six points have been selected from the vertices of the processing unit voxel, it is not limited to this. It is sufficient to determine the representative coordinate points so as to include a group of four coordinate points that have the same x-coordinate, a group of four coordinate points that have the same y-coordinate, and a group of four coordinate points that have the same z-coordinate. Additionally, the representative coordinate points are not limited to the coordinate points of the vertices of the processing unit voxel.
  • FIGS. 11A and 11B are views each showing coordinate points on a camera image used for approximate calculation and distances between the respective coordinate points. The approximate calculation unit 107 calculates, from the transformation results of the representative coordinate points, distances Lx1, Lx2, Ly1, Ly2, Lz1, and Lz2 between the coordinate points shown in FIGS. 11A and 11B. The approximate calculation unit 107 transforms an arbitrary coordinate point of the three-dimensional coordinate points into a coordinate point of the camera image by using the following calculations.
  • In FIG. 11A, assume that the distance between the transformed coordinate point 700 and the transformed coordinate point 702 is Lx1 and that the distance between the transformed coordinate point 701 and the transformed coordinate point 705 is Lx2. By using the distances between these coordinate points, movement amounts dux and dvx on the camera image when moving from the representative coordinate point 600 of the processing unit voxel by a distance dx in the x-axis direction in the three-dimensional space are calculated by
  • dux = ((ux − ub) × dx × Lx1) / ((Lx1 × dx) + (Lx2 × (X2 − X1 − dx)))  (11)
  • dvx = ((vx − vb) × dx × Lx1) / ((Lx1 × dx) + (Lx2 × (X2 − X1 − dx)))  (12)
  • In addition, in FIG. 11A, assume that the distance between the transformed coordinate point 700 and the transformed coordinate point 701 is Ly1 and that the distance between the transformed coordinate point 702 and the transformed coordinate point 705 is Ly2. Movement amounts duy and dvy on the camera image when moving from the representative coordinate point 600 of the processing unit voxel by a distance dy in the y-axis direction in the three-dimensional space are calculated by
  • duy = ((uy − ub) × dy × Ly1) / ((Ly1 × dy) + (Ly2 × (Y2 − Y1 − dy)))  (13)
  • dvy = ((vy − vb) × dy × Ly1) / ((Ly1 × dy) + (Ly2 × (Y2 − Y1 − dy)))  (14)
  • Furthermore, in FIG. 11B, assume that the distance between the transformed coordinate point 700 and the transformed coordinate point 701 is Lz1 and that the distance between the transformed coordinate point 703 and the transformed coordinate point 704 is Lz2. Movement amounts duz and dvz on the camera image when moving from the representative coordinate point 600 of the processing unit voxel by a distance dz in the z-axis direction in the three-dimensional space are calculated by
  • duz = ((uz − ub) × dz × Lz1) / ((Lz1 × dz) + (Lz2 × (Z2 − Z1 − dz)))  (15)
  • dvz = ((vz − vb) × dz × Lz1) / ((Lz1 × dz) + (Lz2 × (Z2 − Z1 − dz)))  (16)
  • The results of these calculations can be used to transform an arbitrary coordinate point 304 (X, Y, Z) of the target three-dimensional space into a coordinate point (u, v) on the camera image by

  • u = ub + dux + duy + duz  (17)
  • v = vb + dvx + dvy + dvz  (18)
  • where X=X1+dx, Y=Y1+dy, and Z=Z1+dz.
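  • A minimal sketch of the deformation-aware approximation of equations (11) to (18); the function names are illustrative, and the pairing of transformed points and projected edge lengths with each axis follows the description above.

```python
def deformed_movement(delta, span, pix_diff, L1, L2):
    """Equations (11) to (16) for one axis: the pixel movement amount for
    moving `delta` along an edge of length `span`, where the two opposite
    voxel edges project to lengths L1 and L2 on the camera image."""
    return pix_diff * delta * L1 / (L1 * delta + L2 * (span - delta))

def approximate_image_point_deformed(base_uv, axis_uv, spans, deltas, lengths):
    """Equations (17) and (18): sum the per-axis movement amounts.

    base_uv -- transformed coordinate point 700, i.e. (ub, vb)
    axis_uv -- (ux, vx), (uy, vy), (uz, vz)
    spans   -- (X2 - X1, Y2 - Y1, Z2 - Z1)
    deltas  -- (dx, dy, dz)
    lengths -- ((Lx1, Lx2), (Ly1, Ly2), (Lz1, Lz2))
    """
    ub, vb = base_uv
    u, v = ub, vb
    for (pu, pv), span, d, (L1, L2) in zip(axis_uv, spans, deltas, lengths):
        u += deformed_movement(d, span, pu - ub, L1, L2)  # (17)
        v += deformed_movement(d, span, pv - vb, L1, L2)  # (18)
    return u, v
```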
  • As described above, according to the third embodiment, since the approximate calculation of the coordinate transformation takes into account the degree of deformation that occurs when the three-dimensional coordinate points are projected, coordinate transformation can be performed with higher accuracy than in the first embodiment. That is, the quality of the three-dimensional model to be generated improves compared to the first embodiment.
  • Note that the second embodiment can be applied so that whether to use the approximate calculation or the matrix calculation of the third embodiment is selected based on the size of the processing unit voxel and the size of the model voxel. Also, in each of the above-described embodiments, after processing in a model voxel is completed, it is possible to set the model voxel as a processing unit voxel, further divide the model voxel, and repetitively perform processing. According to the above described embodiments, it is possible to reduce the calculation load at the time of three-dimensional model generation.
  • Although the embodiments of the present invention have been described above, the present invention is not limited to the above-described embodiments, and various modifications and changes can be made within the spirit and scope of the present invention described in the appended claims. For example, the approximate calculation unit 107 can calculate the two-dimensional coordinate point of a non-representative coordinate point by linear interpolation of the two-dimensional coordinate points of the representative coordinate points on a camera image. This allows a three-dimensional model to be generated with a lighter load. It may also be set so that the user can designate a specific method among the plurality of approximate calculation methods described in the aforementioned embodiments. As a result, it is possible to implement three-dimensional model generation suited to the desired three-dimensional model accuracy and the available calculation resources.
  • Other Embodiments
  • Embodiment(s) of the present invention can also be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g., one or more programs) recorded on a storage medium (which may also be referred to more fully as a ‘non-transitory computer-readable storage medium’) to perform the functions of one or more of the above-described embodiment(s) and/or that includes one or more circuits (e.g., application specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiment(s), and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiment(s) and/or controlling the one or more circuits to perform the functions of one or more of the above-described embodiment(s). The computer may comprise one or more processors (e.g., central processing unit (CPU), micro processing unit (MPU)) and may include a network of separate computers or separate processors to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a network or the storage medium. The storage medium may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like.
  • While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
  • This application claims the benefit of Japanese Patent Application No. 2017-028423, filed Feb. 17, 2017 which is hereby incorporated by reference herein in its entirety.

Claims (16)

What is claimed is:
1. An information processing apparatus that generates a three-dimensional model based on a plurality of camera images obtained using a plurality of cameras, comprising:
a transformation unit configured to transform, into two-dimensional coordinate points on a camera image, a plurality of representative coordinate points specified from one processing unit voxel of a plurality of processing unit voxels that are obtained by dividing a target three-dimensional space serving as a target of three-dimensional model generation;
a determination unit configured to determine, by using transformation results of the plurality of representative coordinate points by the transformation unit, a coordinate point on the camera image corresponding to an internal coordinate point of the one processing unit voxel; and
a generation unit configured to generate the three-dimensional model based on the coordinate point on the camera image corresponding to the internal coordinate point of the one processing unit voxel determined by the determination unit.
2. The apparatus according to claim 1, wherein the determination unit determines, based on positional relationships of the plurality of representative coordinate points in the target three-dimensional space and positional relationships of the plurality of representative coordinate points in the camera image, the coordinate point on the camera image corresponding to the internal coordinate point of the one processing unit voxel.
3. The apparatus according to claim 1, wherein the determination unit determines, by linear interpolation of the transformation results of the plurality of representative coordinate points, the coordinate point on the camera image corresponding to the internal coordinate point of the one processing unit voxel.
4. The apparatus according to claim 1, comprising:
an input unit configured to input a silhouette image to identify an object region forming the three-dimensional model and a region other than the object region; and
a determination unit configured to determine, based on the silhouette image and a coordinate point, of a position related to a model voxel obtained by dividing the one processing unit voxel, on the camera image determined by the determination unit, whether the model voxel forms the object region of the three-dimensional model,
wherein the generation unit generates the three-dimensional model based on a determination result of the determination unit.
5. The apparatus according to claim 1, wherein the transformation unit transforms, by matrix transformation and perspective projection transformation, a coordinate point of the plurality of representative coordinate points on the target three-dimensional space into the coordinate point on the camera image.
6. The apparatus according to claim 1, wherein the determination unit has a function of executing matrix calculation of transforming a coordinate point of the target three-dimensional space into the coordinate point on the camera image by matrix transformation and perspective projection transformation,
and the information processing apparatus comprises
a selection unit configured to select one of approximate calculation and the matrix calculation based on the relationship between the size of the one processing unit voxel and the size of a model voxel obtained by dividing the one processing unit voxel.
7. The apparatus according to claim 6, wherein the selection unit
selects the matrix calculation in a case in which a ratio of the size of the model voxel to the size of the one processing unit voxel exceeds an upper threshold and in a case in which the ratio is lower than a lower threshold which is smaller than the upper threshold, and
selects the approximate calculation in other cases.
8. The apparatus according to claim 1, wherein the determination unit calculates a transformation coefficient that is used to transform each of movement amounts of an x-coordinate, a y-coordinate, and a z-coordinate in the target three-dimensional space into movement amounts of a u-coordinate and a v-coordinate on the camera image, and the determination unit calculates a coordinate point on the camera image corresponding to an arbitrary coordinate point on the target three-dimensional space by using the transformation coefficient.
9. The apparatus according to claim 8, wherein the plurality of representative coordinate points include a pair of coordinate points in which only the x-coordinates differ, a pair of coordinate points in which only the y-coordinates differ, and a pair of coordinate points in which only the z-coordinates differ.
10. The apparatus according to claim 9, wherein the plurality of representative coordinate points are formed from four coordinate points.
11. The apparatus according to claim 8, wherein the determination unit adds, when a line segment of a predetermined length is moved parallel between the representative coordinate points in the target three-dimensional space, a change in the length of the line segment on the camera image to the transformation coefficient.
12. The apparatus according to claim 11, wherein the plurality of representative coordinate points include a group of four coordinate points that have the same x-coordinate, a group of four coordinate points that have the same y-coordinate, and a group of four coordinate points that have the same z-coordinate.
13. The apparatus according to claim 12, wherein the plurality of representative coordinate points are formed from six coordinate points.
14. The apparatus according to claim 1, wherein the plurality of representative coordinate points are formed from coordinate points of a plurality of vertices of the one processing unit voxel.
15. A generation method of generating a three-dimensional model based on a plurality of camera images obtained using a plurality of cameras, the method comprising:
transforming, into two-dimensional coordinate points on a camera image, a plurality of representative coordinate points specified from one processing unit voxel of a plurality of processing unit voxels that are obtained by dividing a target three-dimensional space serving as a target of three-dimensional model generation;
determining, by using transformation results of the plurality of representative coordinate points, a coordinate point on the camera image corresponding to an internal coordinate point of the one processing unit voxel; and
generating the three-dimensional model based on the coordinate point on the camera image corresponding to the internal coordinate point of the one processing unit voxel that has been determined.
16. A non-transitory computer-readable storage medium storing a program for causing a computer to execute a generation method of generating a three-dimensional model based on a plurality of camera images obtained using a plurality of cameras, the method comprising:
transforming, into two-dimensional coordinate points on a camera image, a plurality of representative coordinate points specified from one processing unit voxel of a plurality of processing unit voxels that are obtained by dividing a target three-dimensional space serving as a target of three-dimensional model generation;
determining, by using transformation results of the plurality of representative coordinate points, a coordinate point on the camera image corresponding to an internal coordinate point of the one processing unit voxel; and
generating the three-dimensional model based on the coordinate point on the camera image corresponding to the internal coordinate point of the one processing unit voxel that has been determined.
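Claims 15 and 16 recite the overall method. A minimal end-to-end sketch, reusing the `projection_coefficients` and `approximate_projection` helpers sketched above, might look as follows; the silhouette test is a hypothetical visual-hull style stand-in for whatever foreground-extraction scheme an implementation actually uses, and all names are illustrative:

```python
import numpy as np

def generate_model(unit_voxels, cameras, silhouettes, step=1.0):
    """Project representative points of each processing unit voxel
    exactly, approximate the image coordinates of its internal points,
    and keep a point only if every camera sees it inside a foreground
    silhouette.

    `unit_voxels` yields (origin, size); `cameras` is a list of exact
    projection functions; `silhouettes[i]` is a boolean mask for camera i.
    """
    model = []
    for origin, size in unit_voxels:
        per_camera = [projection_coefficients(cam, origin) for cam in cameras]
        n = int(size / step)
        for ix in range(n):
            for iy in range(n):
                for iz in range(n):
                    offset = (ix * step, iy * step, iz * step)
                    if all(
                        _inside(sil, *approximate_projection(base, coeffs, offset))
                        for (base, coeffs), sil in zip(per_camera, silhouettes)
                    ):
                        model.append(np.asarray(origin, dtype=float) + offset)
    return model

def _inside(mask, u, v):
    """True if image coordinate (u, v) lands on a foreground pixel."""
    h, w = mask.shape
    iu, iv = int(round(u)), int(round(v))
    return 0 <= iv < h and 0 <= iu < w and bool(mask[iv, iu])
```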
US15/891,530 2017-02-17 2018-02-08 Information processing apparatus and method of generating three-dimensional model Expired - Fee Related US10719975B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2017028423A JP6902881B2 (en) 2017-02-17 2017-02-17 Information processing device and 3D model generation method
JP2017-028423 2017-02-17

Publications (2)

Publication Number Publication Date
US20180240264A1 (en) 2018-08-23
US10719975B2 (en) 2020-07-21

Family

ID=63167921

Family Applications (1)

Application Number Title Priority Date Filing Date
US15/891,530 Expired - Fee Related US10719975B2 (en) 2017-02-17 2018-02-08 Information processing apparatus and method of generating three-dimensional model

Country Status (2)

Country Link
US (1) US10719975B2 (en)
JP (1) JP6902881B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021138677A1 (en) * 2020-01-05 2021-07-08 Magik Eye Inc. Transferring the coordinate system of a three-dimensional camera to the incident point of a two-dimensional camera
JP7393092B2 (en) * 2020-08-26 2023-12-06 Kddi株式会社 Virtual viewpoint image generation device, method and program

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US5781194A (en) * 1996-08-29 1998-07-14 Animatek International, Inc. Real-time projection of voxel-based object
JP5093053B2 (en) * 2008-10-31 2012-12-05 カシオ計算機株式会社 Electronic camera
JP5736285B2 (en) * 2011-09-21 2015-06-17 Kddi株式会社 Apparatus, method, and program for restoring three-dimensional shape of object
JP5987650B2 (en) * 2012-11-14 2016-09-07 コニカミノルタ株式会社 Color conversion device
JP2015126301A (en) * 2013-12-25 2015-07-06 株式会社沖データ Image processor and image processing method

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
US20060066614A1 (en) * 2004-09-28 2006-03-30 Oliver Grau Method and system for providing a volumetric representation of a three-dimensional object
US20090160858A1 (en) * 2007-12-21 2009-06-25 Industrial Technology Research Institute Method for reconstructing three dimensional model
US20100156901A1 (en) * 2008-12-22 2010-06-24 Electronics And Telecommunications Research Institute Method and apparatus for reconstructing 3d model
US20160343166A1 (en) * 2013-12-24 2016-11-24 Teamlab Inc. Image-capturing system for combining subject and three-dimensional virtual space in real time

Non-Patent Citations (3)

Title
Nasrin, Roshnara, et al., "Efficient 3D visual hull recognition based on marching cube algorithm", IEEE Sponsored 2nd International Conference on Innovations in Information Embedded and Communication Systems (ICIIECS'15), 19-20 March 2015, 6 pages. *
Shamos, Michael I., "Computational Geometry", Yale University, 1978. *
Waechter, Irina, et al., "Using flow information to support 3D vessel reconstruction from rotational angiography", Medical Physics 35(7): 3302-3316, John Wiley and Sons Ltd., 2008. *

Cited By (4)

Publication number Priority date Publication date Assignee Title
US20210166038A1 (en) * 2019-10-25 2021-06-03 7-Eleven, Inc. Draw wire encoder based homography
US11721029B2 (en) * 2019-10-25 2023-08-08 7-Eleven, Inc. Draw wire encoder based homography
CN112288853A (en) * 2020-10-29 2021-01-29 ByteDance Co., Ltd. Three-dimensional reconstruction method, three-dimensional reconstruction device, and storage medium
WO2023077650A1 (en) * 2021-11-02 2023-05-11 北京鸿合爱学教育科技有限公司 Three-color image generation method and related device

Also Published As

Publication number Publication date
US10719975B2 (en) 2020-07-21
JP6902881B2 (en) 2021-07-14
JP2018133059A (en) 2018-08-23

Similar Documents

Publication Publication Date Title
US10719975B2 (en) Information processing apparatus and method of generating three-dimensional model
US11037325B2 (en) Information processing apparatus and method of controlling the same
US10304161B2 (en) Image processing apparatus, control method, and recording medium
RU2735382C2 (en) Image processing device, image processing device control method and computer-readable data storage medium
US10311595B2 (en) Image processing device and its control method, imaging apparatus, and storage medium
US10559095B2 (en) Image processing apparatus, image processing method, and medium
EP3629302B1 (en) Information processing apparatus, information processing method, and storage medium
TWI716874B (en) Image processing apparatus, image processing method, and image processing program
US10573073B2 (en) Information processing apparatus, information processing method, and storage medium
CN108629799B (en) Method and equipment for realizing augmented reality
CN116051736A (en) Three-dimensional reconstruction method, device, edge equipment and storage medium
JP2022516298A (en) How to reconstruct an object in 3D
US10949988B2 (en) Information processing apparatus, information processing method, and program
US11087482B2 (en) Apparatus that generates three-dimensional shape data, method and storage medium
US11037323B2 (en) Image processing apparatus, image processing method and storage medium
EP4336448A2 (en) Image processing apparatus, image processing method, program, and storage medium
JP7487266B2 (en) Image processing device, image processing method, and program
JP7504614B2 (en) Image processing device, image processing method, and program
US11869146B2 (en) Three-dimensional model generation method and three-dimensional model generation device
CN117237500A (en) Three-dimensional model display visual angle adjusting method and device, electronic equipment and storage medium

Legal Events

Code | Title | Description
FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
AS | Assignment | Owner name: CANON KABUSHIKI KAISHA, JAPAN; Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ITO, HIRONAO;REEL/FRAME:046083/0054; Effective date: 20180205
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER
STPP | Information on status: patent application and granting procedure in general | Free format text: FINAL REJECTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: ADVISORY ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION
STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED
STPP | Information on status: patent application and granting procedure in general | Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS
STPP | Information on status: patent application and granting procedure in general | Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED
STCF | Information on status: patent grant | Free format text: PATENTED CASE
FEPP | Fee payment procedure | Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
LAPS | Lapse for failure to pay maintenance fees | Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY
STCH | Information on status: patent discontinuation | Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362
FP | Lapsed due to failure to pay maintenance fee | Effective date: 20240721