WO2012101962A1 - Posture state estimation device and posture state estimation method - Google Patents
Posture state estimation device and posture state estimation method
- Publication number
- WO2012101962A1 (PCT/JP2012/000090)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- candidate
- region
- posture state
- extracted
- estimated
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G06T7/75—Determining position or orientation of objects or cameras using feature-based methods involving models
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
Definitions
- the present invention relates to a posture state estimation device and a posture state estimation method for estimating a posture state of an object based on image data obtained by photographing an object having a plurality of regions connected by joints.
- Behavior analysis includes, for example, abnormal behavior detection on the street, purchase behavior analysis in a store, work efficiency improvement support in a factory, and form guidance in sports.
- Techniques for estimating a person's posture state based on captured image data have been proposed (see, for example, Patent Document 1).
- In the technique described in Patent Document 1 (hereinafter referred to as "the prior art"), part candidates are first extracted based on elliptical shapes or parallel lines included in the captured image. Next, the prior art calculates a part likelihood and a part relation likelihood using likelihood functions statistically obtained from a plurality of sample images. The prior art then calculates the optimum combination of part candidates based on these likelihoods.
- However, the prior art has a problem in that, depending on the posture state, the posture cannot be estimated with high accuracy. This is because, when a part is partly hidden behind another part, its shape in the image is not elliptical or only one of its two edges can be acquired, so the prior art cannot extract a candidate for that part. For example, suppose that the right upper arm of a person facing left is hidden behind the left upper arm located on the near side. In this case the prior art cannot extract a candidate for the right upper arm, and as a result it cannot distinguish, for example, the posture state in which the right upper arm is hidden behind the left upper arm from the posture state in which the right upper arm is hidden behind the torso.
- An object of the present invention is to provide a posture state estimation device and a posture state estimation method capable of accurately estimating the posture state of an object having a joint.
- A posture state estimation device according to the present invention is a posture state estimation device that estimates the posture state of an object having a plurality of parts connected by joints, based on image data obtained by photographing the object. The device includes: a part candidate extraction unit that extracts part candidates of the parts; a complementary part candidate extraction unit that, for an unextracted part whose part candidate was not extracted by the part candidate extraction unit, assumes that a portion of the unextracted part is hidden behind an already extracted part whose part candidate was extracted, and extracts a part candidate of the unextracted part from the image data; and a posture state estimation unit that estimates the posture state of the object based on the extracted part candidates.
- A posture state estimation method according to the present invention is a method for estimating the posture state of an object having a plurality of parts connected by joints, based on image data obtained by photographing the object. The method includes: a step of extracting part candidates of the parts; a step of, for an unextracted part whose part candidate was not extracted, assuming that a portion of the unextracted part is hidden behind an already extracted part whose part candidate was extracted, and extracting a part candidate of the unextracted part from the image data; and a step of estimating the posture state of the object based on the extracted part candidates.
- the posture state of an object having a joint can be estimated with high accuracy.
- Block diagram showing an example of the configuration of the complementary part candidate extraction unit according to the present embodiment
- Flowchart showing an example of the operation of the posture state estimation device according to the present embodiment
- Diagram for explaining the omega shape in the present embodiment
- Diagram for explaining the vertical distance from the reference line to the omega shape in the present embodiment
- Diagram showing an example of a distance histogram in the present embodiment
- Diagram showing an example of a binarized distance histogram in the present embodiment
- Diagram for explaining various parameters indicating the reference part in the present embodiment
- Diagram showing an example of the contents of the reference part correspondence table in the present embodiment
- Diagram showing an example of the contents of the part region correspondence table in the present embodiment
- Diagram showing an example of the contents of part region data in the present embodiment
- Diagram showing an example of an estimated likelihood map in the present embodiment
- Diagram showing an example of a target image in the present embodiment
- Diagram showing an example of an edge extraction result in the present embodiment
- In the following description, a "part" refers to one of the groups of portions of the human body divided by the joints. That is, the parts are, for example, the head, shoulders, right upper arm, right forearm, left upper arm, left forearm, right upper leg, right lower leg, left upper leg, and left lower leg.
- A "part region" is the region that a part can occupy in an image, that is, the movable range of the part.
- the "part axis" is a virtual central axis in the longitudinal direction of the part.
- Specifically, the part axis refers to a line segment connecting a first joint, which connects the part to another part on the reference part side, with a second joint, which connects the part to a second other part, or with the end of the part.
- The part axis may be defined, for example, by a combination of the coordinate information of the first joint, angle information, and the part length, or by the coordinate information of the first joint and the coordinate information of the second joint or the end of the part.
- the position, orientation, and length of the part axis of the right upper arm substantially match the position, orientation, and length of the central axis of the right upper arm bone.
- "Part thickness" is the thickness of a part around its part axis.
- the "part candidate” is a candidate for the position of the part, and is the position of the part estimated from the image data.
- "Posture state" refers to the type of combination of the postures (positions and/or angles) of two or more parts of interest, for example, "the right arm is bent" or "upright".
- A "posture" is represented by information such as the positions of the joints connecting the parts in a two-dimensional or three-dimensional coordinate system, or the lengths of the related parts and the angles between the parts. Accordingly, "posture state estimation" means estimating the posture state by estimating such information.
- The positions, lengths, and angles mentioned above may be expressed as relative values based on a predetermined body part of the person, or as absolute values in a two-dimensional or three-dimensional coordinate system.
- Note that the posture state estimation apparatus 100 may treat a group of pixels corresponding to a predetermined size as a single pixel and perform the same processing, which allows it to perform processing at high speed.
- In that case, the value of the pixel located at the center of gravity of the group may be used as the value of the group, or the average of the values of the pixels in the group may be used.
- FIG. 1 is a block diagram showing a configuration of a posture state estimation apparatus according to an embodiment of the present invention. For simplicity of explanation, peripheral devices of the posture state estimation device are also illustrated.
- The posture state estimation device 100 includes a body restriction information storage unit 110, an image data acquisition unit 120, a part region estimation unit 130, a part candidate extraction unit 140, a part candidate determination unit 150, a complementary part candidate extraction unit 160, and a posture state estimation unit 170.
- the body restriction information storage unit 110 stores in advance restriction conditions (hereinafter referred to as “body restriction information”) regarding the physical constitution and posture of a person.
- the body restriction information is information used for estimation of a part region described later and extraction of a part candidate.
- the specific content of the body constraint information differs depending on the estimation method of the part region and the extraction method of the part candidate, and therefore will be described later.
- The image data acquisition unit 120 acquires, by wired or wireless communication, image data of an image captured by the monocular camera 200 installed in a predetermined three-dimensional coordinate space, and outputs the image data to the part region estimation unit 130.
- In the present embodiment, the monocular camera 200 is assumed to be a video camera.
- The image data acquisition unit 120 receives moving image data captured continuously in real time by the monocular camera 200, and sequentially outputs each still image forming the moving image data to the part region estimation unit 130.
- In the present embodiment, the image data is described as including an image of only one person; however, the image data may include images of a plurality of people or no person at all.
- FIG. 2 is a diagram for explaining image data.
- In FIG. 2, a three-dimensional coordinate system 410 is set with the origin O at the position where the monocular camera 200 is projected onto the ground.
- The vertical direction is taken as the Y axis, the direction orthogonal to both the Y axis and the optical axis 411 of the monocular camera 200 as the X axis, and the direction orthogonal to the X and Y axes as the Z axis.
- The installation angle of the monocular camera 200 is represented, for example, by the angle θ between the Y axis and the optical axis 411. The monocular camera 200 focuses on a plane 412 included in the range of its angle of view φ. Image data of the image captured in this manner is transmitted to the posture state estimation device 100.
- the part region estimation unit 130 in FIG. 1 estimates a part region of each part based on the image data input from the image data acquisition unit 120. Specifically, part region estimation section 130 estimates the position and orientation of a reference part of a person from image data.
- The "reference part" is a part whose position and orientation are estimated before those of the other parts, and whose estimation result influences the estimation of the positions and orientations of the other parts. It is desirable to choose as the reference part a part for which a stable image can be obtained in the image acquisition space. The part region estimation unit 130 then estimates the part region of each part based on the estimated position and orientation of the reference part.
- part region estimation unit 130 outputs the image data and information indicating a part region for each part (hereinafter referred to as “part region data”) to part candidate extraction unit 140.
- the part candidate extraction unit 140 extracts each part candidate from the input image data, based on the input part region data. Then, the part candidate extraction unit 140 outputs the image data and information indicating the extracted part candidate (hereinafter referred to as “part candidate information”) to the part candidate determination unit 150.
- In the present embodiment, a part candidate is represented by its position on the image, that is, in the two-dimensional coordinate system of the image, and the part candidate information is assumed to be a likelihood map indicating, for each part, the distribution of the likelihood that the part is located at each position.
- That is, the part candidate extraction unit 140 generates a likelihood map in which the likelihood that a part is located in regions other than the part region indicated by the part region data input from the part region estimation unit 130 is reduced.
- the likelihood map generated based on the image data is referred to as “estimated likelihood map”.
- The part candidate determination unit 150 determines, among the parts to be used for posture state estimation, which parts are already extracted parts and which parts are unextracted parts.
- the “already extracted part” is a part from which a part candidate is extracted by the part candidate extraction unit 140.
- the “unextracted part” is a part from which the part candidate is not extracted by the part candidate extraction unit 140.
- the part candidate determination unit 150 outputs the already extracted part identifier indicating the already extracted part and the unextracted part identifier indicating the unextracted part to the complementary part candidate extraction unit 160 together with the image data and the part candidate information.
- the complementation part candidate extraction unit 160 extracts part candidates of the unextracted part from the image data, assuming that a part of the unextracted part is behind the already extracted part. Then, the complementary part candidate extraction unit 160 causes the extraction result to be reflected in part candidate information (estimated likelihood map) to complement the part candidate information, and outputs the part candidate information after complementation to the posture state estimation unit 170.
- FIG. 3 is a block diagram showing an example of the configuration of the complementary part candidate extraction unit 160.
- As shown in FIG. 3, the complementary part candidate extraction unit 160 includes a foreground part estimation unit 161, an exposed area estimation unit 162, an exposed area integration unit 163, an edge extraction area determination unit 164, an edge extraction unit 165, a complementation candidate area determination unit 166, and a part candidate information correction unit 167.
- Each part of the complementary part candidate extraction unit 160 can acquire image data, part candidate information, an extracted part identifier, an unextracted part identifier, and body restriction information, respectively.
- the foreground part estimation unit 161 estimates a foreground part for each unextracted part based on the inputted extracted part identifier and unextracted part identifier.
- the "foreground part” is an already extracted part which overlaps with the unextracted part on the screen and behind which a part of the unextracted part may be hidden.
- the foreground part estimation unit 161 estimates the part axis of each extracted part, and identifies the extracted part where the movable range of the unextracted part and the part axis overlap with each other as the foreground part.
- The foreground part estimation unit 161 outputs the part axis of each already extracted part to the exposed area estimation unit 162, and outputs each foreground part to the exposed area estimation unit 162 in association with the unextracted part identifier of the corresponding unextracted part.
- the exposed area estimation unit 162 estimates the exposed area for each unextracted site and each foreground site.
- the “exposed area” is an area where the unextracted area may be exposed when a part of the unextracted area is behind the foreground area.
- the exposure region estimation unit 162 estimates the edge of the foreground region from the region axis of the foreground region and the region thickness of the foreground region.
- the exposed region estimation unit 162 estimates the range of the edge of the unextracted region from the edge of the foreground region and the region thickness of the unextracted region, and sets this range as the exposed region. Then, the exposed region estimation unit 162 outputs the estimated exposed region to the exposed region integration unit 163 in association with the unextracted region identifier of the unextracted region and the foreground region identifier indicating the foreground region.
- the exposed area integration unit 163 generates an exposed area in which the exposed areas of all foreground areas are integrated for each unextracted area. Specifically, the exposed area integration unit 163 sets an area obtained by excluding the site candidates of all the extracted sites from the sum (logical sum) of the exposed areas of all the foreground areas as an integrated exposed area. Then, the exposure area integration unit 163 outputs the integrated exposure area to the edge extraction area determination unit 164.
- the edge extraction area determination unit 164 determines, for each unextracted area, an edge extraction area to be subjected to edge extraction from the input exposed area and the movable range of the unextracted area. Specifically, the edge extraction area determination unit 164 sets an area (logical product) in which the exposure area of the unextracted portion and the movable range overlap with each other as an edge extraction area. Then, the edge extraction area determination unit 164 outputs the determined edge extraction area to the edge extraction unit 165.
- The edge extraction unit 165 extracts an edge in the edge extraction region for each unextracted part. Specifically, the edge extraction unit 165 estimates the edge angle from the body constraint information and extracts straight-line components of the estimated angle from the edge extraction area of the image data. The edge extraction unit 165 then extracts an edge from the extracted straight-line components, and outputs the extracted edge, together with position information indicating on which side of the edge the unextracted part is located, to the complementation candidate area determination unit 166.
- the complementary candidate area determination unit 166 determines, for each unextracted area, an area estimated to be partially exposed in the unextracted area as a complementary candidate area based on the input edge and position information. Specifically, the complementation candidate area determination unit 166 sets an edge as one side, and calculates a rectangular area having the width of the portion thickness of the unextracted part on the side indicated by the position information as a complementation candidate area. That is, the complementation candidate area is an area with high likelihood that the unextracted area is located with a part thereof hidden behind the already extracted part. Then, the complementation candidate area determination unit 166 associates the determined complementation candidate area with the identification information of the unextracted part, and outputs it to the part candidate information correction unit 167.
- the part candidate information correction unit 167 corrects, for each unextracted part, the part candidate information (estimated likelihood map) such that the likelihood of the unextracted part being located in the corresponding complementary candidate region is high. Specifically, part candidate information correction unit 167 increases the likelihood value of the complementation candidate region in the estimated likelihood map input from part candidate determination unit 150.
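- As a concrete illustration of this correction step, the following is a minimal Python sketch, assuming the estimated likelihood map of one part is a NumPy array and the complementation candidate area is a boolean mask of the same shape; the function name, the boost amount, and the capping at 1.0 are illustrative assumptions, not values from the patent.

```python
import numpy as np

def boost_complement_region(likelihood_map, candidate_mask, boost=0.5, cap=1.0):
    """Raise the likelihood that the unextracted part is located inside its
    complementation candidate region, leaving all other pixels untouched."""
    corrected = likelihood_map.copy()
    corrected[candidate_mask] = np.minimum(corrected[candidate_mask] + boost, cap)
    return corrected

# Example: a 4x4 likelihood map for an unextracted part and a rectangular
# complementation candidate region covering its two rightmost columns.
lik = np.zeros((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[:, 2:] = True
print(boost_complement_region(lik, mask))
```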
- the posture state estimation unit 170 in FIG. 1 estimates the posture state of a person (hereinafter, referred to as “subject”) included in the image data based on the part candidate information input from the part candidate extraction unit 140. Specifically, the posture state estimation unit 170 holds in advance a likelihood map (hereinafter, referred to as a “learning likelihood map”) learned from the reference model in the posture state for each posture state. Then, when the degree of coincidence between the estimated likelihood map and any of the learning likelihood maps is high, the posture state estimating unit 170 estimates the posture state corresponding to the corresponding learning likelihood map as the posture state of the subject.
- the posture state estimation unit 170 transmits information to the information output device 300 such as a display device by wired communication or wireless communication, and notifies the user of the estimation result.
- the posture state estimation unit 170 may estimate not only the posture state but also the orientation of the subject (for example, whether it is sitting facing the right or sitting facing the left).
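- The matching of the estimated likelihood map against the stored learning likelihood maps can be pictured with the following sketch. It assumes each map is a NumPy array of shape (parts, height, width) keyed by posture state; the normalized-correlation score and the acceptance threshold are illustrative assumptions, since the patent leaves the exact degree-of-coincidence measure open.

```python
import numpy as np

def match_score(estimated, learned):
    """Degree of coincidence between two likelihood maps (parts x H x W),
    here a simple normalized correlation."""
    e, l = estimated.ravel(), learned.ravel()
    denom = np.linalg.norm(e) * np.linalg.norm(l)
    return float(e @ l / denom) if denom > 0 else 0.0

def estimate_posture(estimated_map, learning_maps, threshold=0.8):
    """Return the posture state whose learning likelihood map best matches the
    estimated likelihood map, or None if no match exceeds the threshold."""
    best_state, best_score = None, threshold
    for state, learned in learning_maps.items():
        score = match_score(estimated_map, learned)
        if score > best_score:
            best_state, best_score = state, score
    return best_state
```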
- The posture state estimation device 100 is, for example, a computer including a CPU (central processing unit) and storage media such as a RAM (random access memory). That is, the posture state estimation device 100 operates by the CPU executing a stored control program.
- With such a configuration, when a part candidate of a part is not extracted, the posture state estimation apparatus 100 can assume that a portion of that part is hidden behind an already extracted part and extract part candidates of the unextracted part from the image data. Therefore, the posture state estimation apparatus 100 can extract a candidate for a part even when its shape in the image is not elliptical or when only one of its two edges is acquired. As a result, the posture state estimation apparatus 100 can estimate the posture state with higher accuracy than the prior art.
- FIG. 4 is a flowchart showing an example of the operation of the posture state estimation device 100.
- In step S1100, the part region estimation unit 130 acquires image data of one still image from the monocular camera 200 via the image data acquisition unit 120.
- In step S1200, the part region estimation section 130 performs processing for estimating the position and orientation of the reference part (hereinafter referred to as "reference part estimation processing").
- the reference part estimation process is roughly divided into a first process of estimating a person's shoulder joint position and a second process of estimating a person's torso direction.
- the part region estimation unit 130 detects an omega shape from the image data, and estimates a shoulder joint position based on the omega shape.
- FIG. 5 is a diagram for explaining the omega shape.
- The omega (Ω) shape is a characteristic edge shape of the region including a person's head and shoulders, and is the part of the human body most likely to be captured stably when a surveillance camera or the like is used. In addition, the relative positions of the head and shoulders with respect to the torso change little. Therefore, the part region estimation unit 130 first detects the omega shape to locate the head and shoulders of a person, and then estimates the part regions of the other parts with reference to them, so that the part regions can be estimated with high accuracy.
- The omega shape can be detected, for example, using a detector created with Real AdaBoost or the like from a sufficient number of sample images.
- As the feature amount used by the detector, for example, a histogram of oriented gradients (HoG) feature amount, a Sparse feature amount, or a Haar-like feature amount can be used.
- As the learning method, in addition to the boosting method, it is also possible to use an SVM (support vector machine), a neural network, or the like.
- Part region estimation section 130 first detects omega shape 421 from image 420 of the image data.
- In the image 420, the pixels constituting the omega shape 421 (the pixels of the edge portion) carry the digital signal "1" and the other pixels carry the digital signal "0".
- A relatively small rectangular region including the omega shape 421 is determined as the omega region 422.
- The lower side of the omega region 422 is referred to as the reference line 423.
- The part region estimation section 130 removes noise included in the omega region 422. Specifically, the part region estimation unit 130 corrects, as noise, pixels in the omega region 422 that carry the digital signal "1" but are located in the region surrounded by the omega shape 421, replacing them with the digital signal "0". This correction is possible, for example, by performing a so-called closing process.
- The closing process is a process of enlarging and then reducing an image region by a predetermined number of pixels or at a predetermined ratio. This correction can improve the accuracy of the distance histogram described later.
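- A minimal sketch of this noise-removal step follows, using OpenCV purely for illustration; the binary omega edge image, the kernel size, and the stand-in drawing are assumptions. Closing here means dilation followed by erosion, which fills small gaps and holes along the detected edge before the distance histogram is computed.

```python
import cv2
import numpy as np

# omega_region: binary image (uint8, 0 or 255) containing the detected omega edge.
omega_region = np.zeros((60, 80), dtype=np.uint8)
cv2.ellipse(omega_region, (40, 40), (30, 25), 0, 180, 360, 255, 2)  # stand-in edge

# Closing = dilation followed by erosion: small gaps and holes along the edge are
# filled, which stabilizes the distance histogram computed in the next step.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
cleaned = cv2.morphologyEx(omega_region, cv2.MORPH_CLOSE, kernel)
```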
- part region estimation section 130 obtains the vertical distance from reference line 423 to omega shape 421 for each position of reference line 423.
- FIG. 6 is a diagram for explaining the vertical distance from the reference line 423 to the omega shape 421.
- the part region estimation unit 130 treats the direction of the reference line 423 as the X axis and the vertical direction of the reference line 423 as the Y axis.
- the part region estimation unit 130 sets, for example, the number of pixels from the left end of the reference line 423 as an X coordinate value.
- part region estimation section 130 acquires the number of pixels in the Y-axis direction from reference line 423 to the pixels forming omega shape 421, that is, the vertical distance to omega shape 421 as vertical distance d (X).
- the pixels forming the omega shape 421 are, for example, those closest to the reference line 423 among the pixels of the digital signal “1”.
- part region estimation section 130 generates a distance histogram in which data of n (n is a positive integer) vertical distances d (X) are associated with X coordinates.
- FIG. 7 is a diagram showing an example of the distance histogram generated by the part region estimation unit 130 based on the omega region 422 shown in FIG.
- part region estimation section 130 generates distance histogram 430 showing the distribution of vertical distance d (X) in the XY coordinate system, using vertical distance d (X) as the value of the Y axis.
- the distance histogram 430 swells in a shape corresponding to the shoulder, and of those, it protrudes in a range corresponding to the center of the head.
- The part region estimation section 130 applies a predetermined threshold Th to the generated distance histogram 430 to perform binarization processing. Specifically, the part region estimation unit 130 replaces the Y coordinate value with "1" at X coordinates where the vertical distance d(X) is equal to or greater than the threshold Th, and with "0" at X coordinates where d(X) is less than Th. The threshold Th is set to a value that, with high probability, is larger than d(X) at the upper end of the shoulders and smaller than d(X) at the upper end of the head in the omega region 422.
- the binarization processing is not limited to this, and may be another method such as, for example, so-called Otsu binarization (Otsu method).
- FIG. 8 shows an example of the result of binarizing the distance histogram 430 shown in FIG.
- In FIG. 8, the range 441 of "1" indicates the range of X coordinates of the image region of the central part of the head (hereinafter referred to as the "head region"). The wider range 442 containing the range 441 indicates the range of X coordinates of the image region of the shoulders (hereinafter referred to as the "shoulder region"). Therefore, the part region estimation unit 130 extracts the X-axis range of the omega region 422 in the image 420 as the X-axis range of the shoulder region, and the X-axis range 441 of "1" as the X-axis range of the head region.
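- The distance histogram, its binarization with the threshold Th, and the extraction of the head and shoulder ranges can be summarized by the following sketch; it assumes the omega region is a binary NumPy array whose bottom row is the reference line, and the helper names are illustrative.

```python
import numpy as np

def distance_histogram(omega_region):
    """Vertical distance d(X) from the reference line (bottom row) to the
    nearest omega-shape pixel ('1') in each column; 0 if the column is empty."""
    h, w = omega_region.shape
    d = np.zeros(w, dtype=int)
    for x in range(w):
        ys = np.flatnonzero(omega_region[:, x])
        if ys.size:
            d[x] = h - 1 - ys.max()   # counted upward from the bottom row
    return d

def head_and_shoulder_ranges(d, th):
    """Binarize d(X) with threshold Th; the '1' run approximates the head range
    in X, and the span of non-zero d(X) approximates the shoulder range."""
    binarized = (d >= th).astype(int)
    head_cols = np.flatnonzero(binarized)
    shoulder_cols = np.flatnonzero(d > 0)
    head = (head_cols.min(), head_cols.max()) if head_cols.size else None
    shoulder = (shoulder_cols.min(), shoulder_cols.max()) if shoulder_cols.size else None
    return head, shoulder
```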
- the part region estimation unit 130 calculates various parameters indicating the position and the orientation of the reference part based on the extracted shoulder region and head region.
- FIG. 9 is a diagram for explaining various parameters indicating the reference part.
- As symbols indicating the position of the reference part, the part region estimation unit 130 uses H(xh, yh), RSE(x_rse), RD(x_rd), RS(x_rs, y_rs), RSU(y_rsu), and LS. The parentheses attached to each symbol indicate its parameters in the XY coordinate system.
- H is the center of gravity of the head.
- RSE is the position of the end of the right shoulder.
- RD is the distance in the X-axis direction from the center of gravity of the head to the end of the right shoulder.
- RS is the position of the right shoulder joint (hereinafter referred to as “right shoulder position”).
- RSU is the position of the top of the right shoulder.
- LS is the position of the left shoulder joint (hereinafter referred to as "left shoulder position").
- Part region estimation section 130 calculates the value of each parameter, for example, as follows.
- First, the part region estimation unit 130 determines the right shoulder region from the shoulder region extracted based on the result of the binarization processing, according to whether or not the person (torso) is facing the monocular camera 200. The part region estimation unit 130 determines whether the person is facing the monocular camera 200 based on whether the skin color component of the color information of the head region is equal to or greater than a predetermined threshold. Here, the person is assumed to be facing the monocular camera 200, and the shoulder region on the left side of the image is determined to be the right shoulder region.
- part region estimation section 130 calculates the barycentric position of the right shoulder region as right shoulder position RS (x_rs, y_rs).
- The part region estimation unit 130 may also calculate the center of gravity H(xh, yh) of the head and the distance in the Y-axis direction between H(xh, yh) and the omega shape 421 (hereinafter referred to as the "head height Δh"), and calculate the right shoulder position RS(x_rs, y_rs) using the head height Δh.
- Specifically, for example, the part region estimation unit 130 may set a value that is a predetermined ratio of the head height Δh as the distance (xh − x_rs) in the X-axis direction from the center of gravity H of the head to the right shoulder position RS. Further, for example, the part region estimation unit 130 may set a position lower than the shoulder height by Δh/2, half the head height, as the Y coordinate y_rs of the right shoulder position RS.
- part region estimation section 130 calculates a point at which the slope of the edge of omega shape 421 (that is, the change rate of the distance histogram) exceeds a threshold as position RSE (x_rse) of the end of the right shoulder. Then, part region estimation section 130 calculates distance RD (x_rd) in the X-axis direction between center-of-gravity position H of the head and position RSE of the end of the right shoulder.
- Part region estimation section 130 also calculates left shoulder position LS in the same manner.
- each parameter is not limited to the above-mentioned example.
- part lengths such as shoulder width (for example, the distance between the right shoulder position RS and the left shoulder position LS) may be stored in the physical restriction information storage unit 110 as one of physical restriction information.
- part region estimation section 130 may calculate each parameter using the body constraint information.
- part region estimation unit 130 performs the second process with reference to the reference part correspondence table held in advance as one of the body restriction information in body restriction information storage unit 110.
- The reference part correspondence table is a table in which combinations of the center of gravity H of the head, the right shoulder position RS, and the left shoulder position LS are described in association with the orientation of the body estimated from the positions indicated by each combination. That is, the reference part correspondence table describes the relative positional relationship of the parts.
- the combination of the center of gravity H of the head, the right shoulder position RS, and the left shoulder position LS is hereinafter referred to as "the position of the reference portion”.
- the orientation of the body estimated from the position of the reference part is hereinafter referred to as "the orientation of the reference part”.
- The reference part is, as mentioned above, the omega-shaped part showing the head and shoulders of a person. Therefore, the orientation of the reference part corresponds to the orientation of the person's body (torso).
- the part region estimation unit 130 derives the direction of the reference part corresponding to the position of the reference part calculated from the image data from the reference part correspondence table.
- Specifically, the part region estimation unit 130 derives the orientation of the reference part using values normalized so that the center of gravity H of the head is the origin and the length between H and the right shoulder position RS or the left shoulder position LS is 1.
- the right shoulder position RS and the left shoulder position LS may be described in the reference part correspondence table.
- Alternatively, the reference part correspondence table may describe the angle formed between a line passing through the center of gravity H of the head and the right shoulder position RS or the left shoulder position LS, and a vertical straight line passing through the center of gravity H of the head (hereinafter referred to as the "head vertical line").
- The reference part correspondence table may also describe the distance between the center of gravity H of the head and the left shoulder position LS when the distance between the center of gravity H of the head and the right shoulder position RS is 1.
- the part region estimation unit 130 derives the direction of the reference part by calculating parameters corresponding to the parameters described in the reference part correspondence table.
- FIG. 10 is a diagram showing an example of the content of the reference portion correspondence table.
- the reference part correspondence table 450 describes the projection angle 452, the coordinates 453 of the left shoulder position LS, the coordinates 454 of the center of gravity H of the head, and the direction 455 of the reference part in association with the identifier 451.
- Each coordinate is expressed, for example, using a predetermined two-dimensional coordinate system parallel to the two-dimensional coordinate system of the screen, with the right shoulder position RS as the origin.
- The projection angle 452 is, for example, the angle of the predetermined two-dimensional coordinate system with respect to the XZ plane of the three-dimensional coordinate system 410 described in FIG. 2 (that is, the installation angle θ shown in FIG. 2).
- orientation 455 of the reference portion is represented by, for example, a rotation angle with respect to each of the XYZ axes of the three-dimensional coordinate system 410 described in FIG.
- Each coordinate may be expressed using a coordinate system in which other lengths such as the arm length and the height are one.
- the part region estimation unit 130 estimates the position and orientation of the reference part using the body constraint information. This is the end of the description of the reference part estimation process.
- The part region estimation unit 130 then performs processing for estimating a part region for each part based on the estimated position and orientation of the reference part (hereinafter referred to as "part region estimation processing").
- part region estimation unit 130 performs part region estimation processing with reference to the part region correspondence table held in advance as one of the body restriction information in body restriction information storage unit 110.
- the part region correspondence table is a table in which the position and the direction of the reference part are described in association with the part regions of the other parts.
- the part region estimation unit 130 derives a part region corresponding to the position and orientation of the reference part estimated from the image data from the part region correspondence table.
- Each part region is defined, for example, by pixel positions in the image of the image data. Therefore, the part region estimation unit 130 determines, for every pixel of the entire image, whether the pixel belongs to the part region of any part.
- FIG. 11 is a diagram showing an example of the content of the part / region correspondence table.
- The part region correspondence table 460 describes, in association with the identifier 461, the projection angle 462, the position 463 of the head-shoulder region (reference part), the orientation 464 of the head-shoulder region (reference part), and the region 465 of each part.
- Each position and area are represented, for example, by values of a two-dimensional coordinate system of an image.
- The projection angle 462 is, for example, the angle of the predetermined two-dimensional coordinate system with respect to the XZ plane of the three-dimensional coordinate system 410 described in FIG. 2 (that is, the installation angle θ shown in FIG. 2).
- the position 463 of the head and shoulder area is, for example, the right shoulder position RS.
- the orientation 464 of the head and shoulder region is represented by, for example, a rotation angle with respect to each of the XYZ axes of the three-dimensional coordinate system 410 described in FIG.
- the region 465 of each portion is represented by, for example, the center coordinates and the radius of the circle when the region is approximated by a circle. The radius is, in other words, the part length.
- the direction 464 of the head and shoulder area may not necessarily be described in the part area correspondence table 460.
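- Since each part region 465 is approximated by a circle, deciding whether a pixel belongs to a part's region reduces to a point-in-circle test, as in the following sketch; the data class and field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class PartRegion:
    cx: float       # circle centre, image coordinates
    cy: float
    radius: float   # also interpreted as the part length

def in_part_region(x, y, region):
    """True if pixel (x, y) falls inside the circle approximating a part region."""
    return (x - region.cx) ** 2 + (y - region.cy) ** 2 <= region.radius ** 2

# Example: does pixel (120, 95) lie inside a hypothetical right-upper-arm region?
right_upper_arm = PartRegion(cx=110.0, cy=90.0, radius=30.0)
print(in_part_region(120, 95, right_upper_arm))  # True
```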
- the body constraint information may have a configuration other than that described above.
- For example, the body constraint information limits the region in which a part connected to a predetermined part can exist (that is, the part region of that part), based on at least one of the length of the predetermined part and the angle of the joint.
- the physical constraint information includes, for example, at least one of the ratio of the length between a certain part and another part and the movable angle of the joint.
- the physical constraint information defines that the length of the upper arm is 0.6 when the shoulder width is 1.
- For example, the body constraint information describes, for each part, the part length ratio and the three degrees of freedom of movement (in the X-axis, Y-axis, and Z-axis directions) based on the joint closer to the trunk.
- For example, when the part ID of the right upper arm is "3" and the ratio of the part length of the right upper arm to the part length of the shoulders is "0.8", the length of the right upper arm can be defined by a description file or program source (see the sketch below).
- Similarly, when the part ID of the right upper arm is "3" and the ratio of the part thickness of the right upper arm to the part length of the shoulders is "0.2", the thickness of the right upper arm can be defined by a description file or program source (also illustrated in the sketch below).
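- The description file itself is not reproduced here, so the following Python sketch stands in for it; the dictionary layout and function names are assumptions, while the IDs (shoulder = 1, right upper arm = 3) and the ratios 0.8 and 0.2 are the values given above.

```python
# Part IDs and ratios taken from the text; the key names are illustrative.
BODY_CONSTRAINTS = {
    "parts": {
        1: {"name": "shoulder"},
        3: {"name": "right_upper_arm",
            "length_ratio": 0.8,     # part length / shoulder part length
            "thickness_ratio": 0.2,  # part thickness / shoulder part length
            },
    },
}

def part_length(part_id, shoulder_length_px):
    return BODY_CONSTRAINTS["parts"][part_id]["length_ratio"] * shoulder_length_px

def part_thickness(part_id, shoulder_length_px):
    return BODY_CONSTRAINTS["parts"][part_id]["thickness_ratio"] * shoulder_length_px

# With a shoulder length of 100 px the right upper arm is 80 px long and 20 px thick.
print(part_length(3, 100), part_thickness(3, 100))
```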
- For example, assume that the joint ID of the right shoulder is "100", the part ID of the shoulders is "1", and the part ID of the right upper arm is "3". Also assume that the movable range of the right upper arm is (−60.0, 90.0) about the X axis, (−90.0, 90.0) about the Y axis, and (−90.0, 90.0) about the Z axis.
- In this case, the body constraint information can define the degrees of freedom of the right upper arm about the right shoulder joint by, for example, a description file or program source along the lines of the sketch below.
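- The following Python sketch illustrates such a joint entry; the record layout and field names are assumptions, while the joint ID 100, the part IDs 1 and 3, and the angle ranges are the values given above.

```python
# Joint and part IDs and angle ranges as given in the text; the nesting and
# field names are only an assumed sketch of the description file.
JOINT_CONSTRAINTS = {
    100: {                      # joint ID: right shoulder
        "parent_part": 1,       # shoulders
        "child_part": 3,        # right upper arm
        "movable_range_deg": {  # (min, max) rotation about each axis
            "x": (-60.0, 90.0),
            "y": (-90.0, 90.0),
            "z": (-90.0, 90.0),
        },
    },
}

def angle_allowed(joint_id, axis, angle_deg):
    lo, hi = JOINT_CONSTRAINTS[joint_id]["movable_range_deg"][axis]
    return lo <= angle_deg <= hi

print(angle_allowed(100, "x", 45.0))   # True
print(angle_allowed(100, "x", -75.0))  # False, outside the X-axis range
```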
- the information indicating the connection relationship between the joint and the part indicated by the joint ID and the part ID and the information indicating the movable direction and angle of each joint may be described in different files.
- the body constraint information may be described by information obtained by projecting each position on a two-dimensional coordinate system.
- Although the position information is unique in three dimensions, its value may differ depending on the projection angle.
- In that case, the movable direction and angle are values in the two-dimensional coordinate system. Therefore, when holding such values as body constraint information, the body constraint information storage unit 110 must also hold information on the projection angle.
- When the part region estimation unit 130 completes the estimation of the part regions, it outputs to the part candidate extraction unit 140, as part region data, information indicating, for every pixel of the entire image of the image data, whether the pixel belongs to the part region of any part.
- The part region data may have, for example, a structure in which pixel information Kij, indicating whether the pixel belongs to the part region of any part, is associated with every pixel position (i, j) of the image data.
- Each element of the pixel information Kij takes, for example, “1” when it belongs to the part region of the corresponding part, and “0” when it does not.
- For example, k1 corresponds to the part region of the right upper arm, and k2 corresponds to the part region of the right forearm.
- the partial region data may indicate which partial region the region corresponds to for each partial region set in advance in the image, or may indicate the coordinates of the outer edge of the partial region for each portion.
- When the position of the reference part is normalized, the part region correspondence table describes the part region corresponding to the normalized reference part. Further, as in the reference part correspondence table described above, other information such as the right shoulder position RS and the left shoulder position LS may also be described in the part region correspondence table.
- the part region estimation unit 130 derives a part region of each part by calculating parameters corresponding to the parameters described in the part region correspondence table.
- FIG. 12 is a diagram showing an example of the content of part region data. Here, in order to simplify the description, the positions of the respective parts in the upright state are illustrated together.
- part region data indicates a part region 471 of the right upper arm and a part region 472 of the right forearm. As described above, these region regions 471 and 472 are estimated based on the position and orientation of the previously estimated reference region 473.
- the part region estimation unit 130 estimates the part region of each part using the body constraint information. This is the end of the description of the part region estimation process.
- the part candidate extraction unit 140 extracts part candidates for part regions for each part, and generates part candidate information indicating the extracted part candidates.
- Hereinafter, the process of generating an estimated likelihood map as the part candidate information is referred to as the "estimated likelihood map generation process".
- In a first example of the estimated likelihood map generation process, the part candidate extraction unit 140 determines, for each pixel in the part region of each part, an image feature suitable for representing the position and orientation of the part from the image data, and calculates a likelihood value indicating how likely it is that the part is located there.
- The part candidate extraction unit 140 then generates an estimated likelihood map indicating the distribution of the likelihood values of the pixels, using the likelihood values calculated from the image data.
- the likelihood value may be a value normalized to be in the range of 0 to 1, or may be a real number including a positive integer or a negative number.
- As the classifier, for example, a technique of recognizing a target of interest, such as a face, in an image using a strong classifier that integrates a plurality of weak classifiers can be adopted.
- This technique creates a strong classifier by integrating, with AdaBoost, the sum of a plurality of weak classifiers based on rectangle information, and recognizes the target of interest in the image by cascading strong classifiers.
- As the image feature amount, for example, a scale-invariant feature transform (SIFT) feature amount can be adopted (see, for example, Non-Patent Document 1 and Non-Patent Document 2).
- Since the SIFT feature amount is not affected by scale change, rotation, or translation of the object to be detected, it is particularly effective for detecting parts, such as the arms, that can rotate in various directions. That is, the SIFT feature amount is suitable for the present embodiment, in which a posture state is defined by the relative joint positions and angles of two or more parts.
- In the present embodiment, the strong classifier Hk is generated by the AdaBoost algorithm. That is, when the strong classifier Hk is generated, learning is repeated on a plurality of learning images prepared in advance for each part until it can be determined with the desired accuracy whether an image region is, for example, the right upper arm or the right forearm.
- The strong classifier Hk is generated by cascading a plurality of weak classifiers.
- The part candidate extraction unit 140 calculates the image feature amount for each part and each pixel, and inputs it to the strong classifier Hk. The part candidate extraction unit 140 then calculates the sum of the outputs of the weak classifiers constituting the strong classifier Hk, each multiplied by the reliability α obtained in advance for that weak classifier, and subtracts a predetermined threshold Th from this sum to obtain the likelihood value ck for each part and each pixel.
- Here, c1 represents the likelihood value of the right upper arm and c2 represents the likelihood value of the right forearm.
- The part candidate extraction unit 140 may also determine, for each pixel, whether the pixel is included in any part region; if it is, the likelihood value is calculated using the classifier of that part, and if it is not, the likelihood value of that part may be set to zero.
- In other words, the part candidate extraction unit 140 may use, as the final estimated likelihood map, the result of integrating the pixel information (Kij) output from the part region estimation unit 130 with the likelihood values (Cij) of the pixels calculated independently of the part regions.
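- The following sketch summarizes how the likelihood value ck per pixel is obtained from the strong classifier and how the part region mask Kij restricts where the classifier is evaluated; the weak classifiers are assumed to be callables taking a feature vector, and all names and signatures are illustrative rather than taken from the patent.

```python
import numpy as np

def strong_classifier_likelihood(features, weak_classifiers, alphas, th):
    """Likelihood value ck for one pixel of one part: the reliability-weighted
    sum of weak classifier outputs minus the threshold Th."""
    s = sum(a * h(features) for h, a in zip(weak_classifiers, alphas))
    return s - th

def estimated_likelihood_map(feature_map, part_region_mask,
                             weak_classifiers, alphas, th):
    """Evaluate the classifier only inside the part region (Kij == 1); the
    likelihood elsewhere is left at zero, mirroring the masking described above."""
    h, w = part_region_mask.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            if part_region_mask[i, j]:
                out[i, j] = strong_classifier_likelihood(
                    feature_map[i, j], weak_classifiers, alphas, th)
    return out
```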
- FIG. 13 is a diagram illustrating an example of the estimated likelihood map. Here, only the likelihood value of one part (for example, the upper right arm) of the estimated likelihood map is shown, and the higher the likelihood value, the darker the shaded area is. As shown in FIG. 13, the estimated likelihood map 478 represents the distribution of the likelihood that the part is located.
- The data structure of the estimated likelihood map is, for example, similar to that of the part region data, with the likelihood values ck of the respective parts held for each pixel position (i, j).
- the part candidate extraction unit 140 generates an estimated likelihood map. This is the end of the description of the first example of the details of the estimated likelihood map generation process.
- In a second example of the estimated likelihood map generation process, the part candidate extraction unit 140 generates the estimated likelihood map by extracting parallel lines from the edges included in the image data, as in the technique described in Patent Document 1.
- Specifically, the part candidate extraction unit 140 extracts parallel lines with reference to a correspondence table, held in advance as one of the body constraint information in the body constraint information storage unit 110, that associates the shoulder joint length with the standard thickness value of each part.
- The part candidate extraction unit 140 searches the part region for pairs of parallel lines separated by a distance corresponding to the standard thickness of the part, while rotating the search direction through 360 degrees. When a corresponding pair of parallel lines exists, the part candidate extraction unit 140 repeats a process of voting for each pixel of the area enclosed by those parallel lines, and generates the estimated likelihood map based on the final number of votes of each pixel.
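- A simplified sketch of this voting scheme is given below: for each candidate direction, a pixel receives a vote when a pair of edge pixels separated by the part's standard thickness straddles it. The angle step and the brute-force pixel loop are assumptions made to keep the example short, not details from the patent.

```python
import numpy as np

def parallel_line_votes(edge_img, part_thickness, angle_step_deg=10):
    """For each candidate direction, vote for pixels lying midway between a pair
    of edge pixels separated by the part's standard thickness; the per-pixel
    vote counts form the (unnormalised) estimated likelihood map."""
    h, w = edge_img.shape
    votes = np.zeros((h, w), dtype=int)
    half = part_thickness / 2.0
    for angle in range(0, 180, angle_step_deg):       # directions are symmetric
        theta = np.deg2rad(angle)
        nx, ny = -np.sin(theta), np.cos(theta)         # unit normal to the line direction
        dy, dx = int(round(half * ny)), int(round(half * nx))
        for y in range(h):
            for x in range(w):
                y1, x1, y2, x2 = y + dy, x + dx, y - dy, x - dx
                if (0 <= y1 < h and 0 <= x1 < w and 0 <= y2 < h and 0 <= x2 < w
                        and edge_img[y1, x1] and edge_img[y2, x2]):
                    votes[y, x] += 1
    return votes
```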
- In this case, the estimated likelihood map and the learning likelihood map hold, for each pixel and each part, the direction of the parallel lines and the corresponding number of votes (hereinafter referred to as the "likelihood value of the direction").
- the part candidate extraction unit 140 determines the direction in which the likelihood value of the direction is the highest as the main edge direction of the part.
- For example, the posture state estimation unit 170 may take the sum of the likelihood values of all pixels for each direction and determine the direction with the largest sum to be the direction with the highest likelihood value of the direction.
- the part candidate extraction unit 140 generates the estimated likelihood map using the body constraint information. This concludes the description of the second example of the details of the estimated likelihood map generation process.
- Next, the part candidate determination unit 150 specifies the already extracted parts and the unextracted parts. Specifically, the part candidate determination unit 150 determines a part that satisfies a predetermined condition to be an already extracted part, and a part that does not satisfy the predetermined condition to be an unextracted part.
- The predetermined condition is, for example, in the case of the estimated likelihood map, that the average likelihood value of the part exceeds a predetermined threshold, or that the number of pixels whose likelihood exceeds a predetermined threshold exceeds a predetermined count.
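- For example, the determination could be sketched as follows, assuming one estimated likelihood map (a NumPy array) per part; the threshold values are placeholders, not values from the patent.

```python
import numpy as np

def classify_parts(likelihood_maps, mean_th=0.3, pixel_th=0.5, count_th=50):
    """Split parts into extracted / unextracted: a part counts as extracted if
    its mean likelihood, or its number of high-likelihood pixels, is large enough."""
    extracted, unextracted = [], []
    for part, lmap in likelihood_maps.items():
        if lmap.mean() >= mean_th or np.count_nonzero(lmap >= pixel_th) >= count_th:
            extracted.append(part)
        else:
            unextracted.append(part)
    return extracted, unextracted
```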
- Here, it is assumed that a target image 510 including a person's head 511, torso 512, left upper arm 513, and left forearm 514, together with a right upper arm 515 and right forearm 516 hidden behind the left arm, is input as shown in the corresponding figure. It is further assumed that the part candidate extraction unit 140 extracts part candidates by extracting parallel lines, as in the technique described in Patent Document 1.
- In this case, the part candidate determination unit 150 sets the head 511, torso 512, left upper arm 513, and left forearm 514 as already extracted parts, and the right upper arm 515 and right forearm 516 as unextracted parts.
- In step S1600, the complementary part candidate extraction unit 160 performs a process of extracting the part candidates of the unextracted parts and complementing the part candidate information (hereinafter referred to as the "part candidate complementing process").
- FIG. 17 is a flowchart showing an example of the part candidate complementing process.
- FIGS. 18 to 23 are schematic diagrams showing how the part candidate information is complemented by the part candidate complementing process.
- The part candidate complementing process will be described below with reference to FIGS. 17 to 23.
- First, the foreground part estimation unit 161 estimates the part axis of each already extracted part. Specifically, for example, when the external shape indicated by the part candidate of the already extracted part can be approximated by an ellipse, the foreground part estimation unit 161 takes the major axis of the ellipse as the part axis. Alternatively, the foreground part estimation unit 161 may approximate, by an ellipse, the region in which the average likelihood value of the pixels exceeds a predetermined threshold, and use the major axis of that ellipse as the part axis.
- Alternatively, the foreground part estimation unit 161 may adopt the direction with the most parallel components in the part candidate as the axial direction of the part, and use as the part axis the straight line in that direction passing through the center of gravity of the region consisting of pixels whose likelihood value for that direction is equal to or greater than a predetermined threshold.
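- As an illustration of obtaining the part axis from a part candidate, the sketch below estimates the longitudinal axis of the high-likelihood pixels with a principal-component analysis, which approximates the major axis of a fitted ellipse; the threshold and function name are assumptions.

```python
import numpy as np

def part_axis_from_likelihood(lmap, th=0.5):
    """Estimate a part axis as the principal (longitudinal) axis of the
    high-likelihood pixels, approximating the major axis of a fitted ellipse."""
    ys, xs = np.nonzero(lmap >= th)
    if len(xs) < 2:
        return None
    pts = np.column_stack([xs, ys]).astype(float)
    centre = pts.mean(axis=0)
    # Principal direction = eigenvector of the covariance matrix with the
    # largest eigenvalue.
    cov = np.cov((pts - centre).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    direction = eigvecs[:, np.argmax(eigvals)]
    # Endpoints: extreme projections of the pixel cloud onto that direction.
    proj = (pts - centre) @ direction
    return tuple(centre + proj.min() * direction), tuple(centre + proj.max() * direction)
```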
- the foreground part estimation unit 161 selects one unextracted part, and acquires the movable range and the part thickness of the unextracted part.
- the movable range of the unextracted part can be estimated from, for example, each joint position indicated by the part axis of the already extracted part and body restriction information indicating the movable range for each joint of the unextracted part.
- the part thickness of the unextracted part can be obtained, for example, from the physical restriction information.
- the foreground part estimation unit 161 specifies, as a foreground part, an already extracted part whose part axis overlaps with the movable range of the unextracted part under selection.
- the movable range 545 of the upper right arm includes the part axes 541 to 543 of the head, body, and upper left arm. Therefore, the foreground part estimation unit 161 identifies the head, torso, and upper left arm as the foreground part.
- Note that the foreground part estimation unit 161 may instead identify as a foreground part an already extracted part for which the number of pixels whose likelihood exceeds a predetermined threshold within the movable range of the unextracted part is equal to or greater than a predetermined count.
- the acquisition of the part axis may be performed not by the foreground part estimation unit 161 but by the exposure area estimation unit 162 in the latter stage.
- In step S1604, the exposed area estimation unit 162 selects one foreground part and acquires its part thickness.
- the part thickness of the foreground part can be obtained, for example, from body constraint information.
- The exposed region estimation unit 162 then estimates the edge of the already extracted part under selection from its part axis and part thickness. Specifically, for example, as illustrated in FIG. 19, the exposed region estimation unit 162 takes as the edge of the already extracted part a rectangle 563 whose opposite sides are two line segments at a distance of half the part thickness from the part axis.
- Then, the exposed area estimation unit 162 estimates, from the edges of the already extracted part and the part thickness of the unextracted part, the exposed area of the unextracted part under selection with respect to the already extracted part under selection.
- Specifically, for example, the exposed area estimation unit 162 extracts two line segments that are parallel to the part axis 543 of the already extracted part and located at a distance of 1.2 times the part thickness of the unextracted part from the edges (rectangle 563) of the already extracted part.
- The exposed area estimation unit 162 then obtains a rectangle 565 having the two extracted line segments as opposite sides, and sets the rectangle 565 as the maximum range of the edges of the unextracted part.
- The exposed area estimation unit 162 sets the area between the rectangle 563, which is the edge of the already extracted part, and the rectangle 565, which is the maximum range of the edges of the unextracted part, as the exposed area 573 of the unextracted part under selection.
- Alternatively, the exposed area estimation unit 162 may determine the rectangle 565, which is the maximum range of the edges of the unextracted part, based on the part thickness and the part axis length of the already extracted part, for example by setting it at 1.2 times the part thickness of the already extracted part. In this case, the part thickness of the unextracted part may be acquired not by the foreground part estimation unit 161 but by the complementation candidate area determination unit 166 at the subsequent stage.
- In step S1607, the exposed area estimation unit 162 determines whether there remains a foreground part for which the exposed area has not yet been estimated. If there is an unprocessed foreground part (S1607: YES), the exposed area estimation unit 162 returns to step S1604 and selects the next foreground part.
- As a result, as shown in FIGS. 19 to 21, the exposed area estimation unit 162 estimates the exposed areas 571 to 573 corresponding to the part axes 541 to 543 of the head, torso, and upper left arm.
- If there is no unprocessed foreground part (S1607: NO), the exposed area estimation unit 162 proceeds to step S1608.
- In step S1608, the exposed area integration unit 163 obtains the union of all the exposed areas estimated for the unextracted part under selection.
- The exposed area integration unit 163 then sets, as the integrated exposed area, the area obtained by excluding the part candidates of all already extracted parts from this union of exposed areas.
- Here, the part candidate of an already extracted part may be the area enclosed by the edges obtained from the part thickness as described above, or may be the area in which the value of the estimated likelihood map is equal to or greater than a predetermined threshold.
- For example, as shown in FIG. 22, the exposed area integration unit 163 acquires, as the integrated exposed area, the union of the exposed areas 571 to 573 of the head, torso, and upper left arm, excluding the part candidates 531 to 534 of the head, torso, upper left arm, and left forearm.
- In step S1610, the edge extraction area determination unit 164 determines, as the edge extraction area, the area in which the integrated exposed area overlaps the movable range 545 (see FIG. 18) of the unextracted part under selection.
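- Treating the areas as boolean pixel masks, the two preceding operations can be sketched as follows (a simplification under assumed data structures, not the patent's implementation):

```python
import numpy as np

def edge_extraction_area(exposed_masks, extracted_candidate_masks, movable_mask):
    """Integrated exposed area = union of the per-foreground-part exposed
    areas minus every already-extracted part candidate; the edge extraction
    area is its overlap with the unextracted part's movable range."""
    union_exposed = np.logical_or.reduce(exposed_masks)
    union_candidates = np.logical_or.reduce(extracted_candidate_masks)
    integrated_exposed = union_exposed & ~union_candidates
    return integrated_exposed & movable_mask
```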
- Next, the edge extraction unit 165 estimates the angle of the edges of the unextracted part under selection from the body constraint information, and extracts straight-line components at the estimated angle from the edge extraction area of the image data.
- The angle of the edges may be taken, for example, in steps of three degrees around the joint on the reference part side.
- The edge extraction unit 165 then extracts an edge from the extracted straight-line components and determines on which side of the edge the unextracted part is located. Which side the unextracted part lies on can be determined from which side of the edge the already extracted part corresponding to the underlying exposed area is located on.
- Here, the edge extraction unit 165 extracts the upper edge 525 of the upper right arm and determines that the upper right arm is located below the edge 525.
- In step S1612, the complementation candidate area determination unit 166 determines whether an edge of the unextracted part under selection has been extracted. If an edge has been extracted (S1612: YES), the complementation candidate area determination unit 166 proceeds to step S1613. If no edge has been extracted (S1612: NO), the complementation candidate area determination unit 166 proceeds to step S1615, described later.
- In step S1613, the complementation candidate area determination unit 166 sets, on the side of the extracted edge where the unextracted part under selection is located, a rectangular area whose length is the part axis length of the unextracted part and whose width is its part thickness. The complementation candidate area determination unit 166 then determines this rectangular area as the complementation candidate area.
- For example, as shown in FIG. 23, the complementation candidate area determination unit 166 determines as the complementation candidate area 583 a rectangular area whose upper long side is the extracted edge 525 of the upper right arm and whose width is the part thickness of the upper right arm.
- Note that the complementation candidate area is not limited to a rectangle, and may be set to another shape, such as an ellipse, based on the edge of the unextracted part.
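- A minimal sketch of this construction, assuming the extracted edge is given by two end points and that some point known to lie on the unextracted part's side (for example a point of the exposed area) is available to disambiguate the side (names are hypothetical):

```python
import numpy as np

def complementation_candidate_rect(edge_p1, edge_p2, part_thickness, side_point):
    """Rectangle that uses the extracted edge as one long side and extends by
    the part thickness toward the side on which the unextracted part lies,
    indicated here by any point known to be on that side."""
    p1, p2 = np.asarray(edge_p1, float), np.asarray(edge_p2, float)
    axis = p2 - p1
    axis = axis / np.linalg.norm(axis)
    normal = np.array([-axis[1], axis[0]])
    # Flip the normal so it points toward the unextracted part's side.
    if np.dot(np.asarray(side_point, float) - p1, normal) < 0:
        normal = -normal
    off = normal * part_thickness
    return np.array([p1, p2, p2 + off, p1 + off])
```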
- In step S1614, the part candidate information correction unit 167 corrects the part candidate information so that the likelihood that the unextracted part under selection is located in the determined complementation candidate area becomes high.
- Specifically, the part candidate information correction unit 167 increases the likelihood values within the complementation candidate area of the estimated likelihood map, thereby weighting the complementation candidate area. That is, the part candidate information correction unit 167 corrects the estimated likelihood map so that the unextracted part is more easily extracted within the complementation candidate area.
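- As a toy illustration (the gain value and function name are assumptions, not taken from the patent), such a correction could simply scale the unextracted part's likelihood map inside a boolean mask of the complementation candidate area:

```python
import numpy as np

def weight_candidate_area(part_likelihood_map, candidate_mask, gain=2.0):
    """Scale the unextracted part's likelihood values inside the
    complementation candidate area so the part is extracted there more easily."""
    corrected = part_likelihood_map.astype(float).copy()
    corrected[candidate_mask] *= gain
    return corrected
```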
- In step S1615, the foreground part estimation unit 161 determines whether there is an unextracted part that has not yet undergone the part candidate information correction process. If there is an unprocessed unextracted part (S1615: YES), the foreground part estimation unit 161 returns to step S1602 and selects the next unextracted part. If there is no unprocessed unextracted part (S1615: NO), the foreground part estimation unit 161 returns to the processing of FIG.
- the complementary part candidate extraction unit 160 can extract part candidates of the upper right arm and the right forearm, and can complement the part candidate information.
- Then, based on the part candidate information that has been complemented as appropriate, the posture state estimation unit 170 determines whether the posture state of the subject corresponds to any of the posture states set in advance as determination targets.
- The posture state estimation unit 170 makes this determination based on whether any of the learning likelihood maps matches the estimated likelihood map.
- To this end, the posture state estimation unit 170 performs a matching degree determination process, which decides whether a learning likelihood map and the estimated likelihood map match based on whether their degree of coincidence is equal to or higher than a predetermined level.
- In a first example of the matching degree determination process, the posture state estimation unit 170 first binarizes the estimated likelihood map and each learning likelihood map using a predetermined threshold. Specifically, the posture state estimation unit 170 converts the likelihood value of each pixel for each part into the digital signal "1" when it is equal to or greater than the predetermined threshold, and into the digital signal "0" when it is less than the predetermined threshold.
- FIG. 24 is a diagram showing an example of a state after binarizing the estimated likelihood map shown in FIG.
- the pixels of the digital signal “1” are represented in gray, and the pixels of the digital signal “0” are represented in white.
- the estimated likelihood map 479 after binarization represents the distribution of a portion that is highly likely to be located.
- Next, the posture state estimation unit 170 takes, for each pixel and each part, the product of the binarized likelihood values of the estimated likelihood map and the learning likelihood map, and uses the sum of these products over all pixels and all parts as an evaluation value. Specifically, the posture state estimation unit 170 superimposes the estimated likelihood map and the learning likelihood map in a predetermined positional relationship, multiplies their binarized likelihood values pixel by pixel, and computes the sum of the products over all pixels and parts.
- The posture state estimation unit 170 shifts the positional relationship of the superimposed estimated likelihood map and learning likelihood map by translation and rotation, and performs the above computation for each positional relationship. The posture state estimation unit 170 then takes the maximum of the obtained evaluation values as the final evaluation value representing the degree of coincidence with that learning likelihood map. When there is a learning likelihood map whose evaluation value is equal to or greater than a predetermined threshold, the posture state estimation unit 170 determines that this learning likelihood map matches the estimated likelihood map.
- An appropriate value for this threshold is set in advance by learning or the like.
- Note that the posture state estimation unit 170 does not necessarily have to binarize the estimated likelihood map and the learning likelihood maps. Without binarization, the posture state estimation unit 170 can determine the degree of coincidence between the learning likelihood map and the estimated likelihood map more accurately; with binarization, it can determine the degree of coincidence faster.
- posture state estimation section 170 determines the degree of coincidence between the estimated likelihood map and the learning likelihood map. This is the end of the description of the first example of the matching degree determination process.
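- The first example can be summarized by the sketch below (a simplification under assumed array shapes; the rotation search is omitted and only integer pixel shifts are tried):

```python
import numpy as np

def match_score(estimated, learned, threshold, shifts):
    """Binarize both (parts, H, W) likelihood maps and return the maximum,
    over the tried pixel shifts, of the pixel-wise product summed over all
    pixels and parts (the rotation search is omitted here)."""
    est = (estimated >= threshold).astype(np.uint8)
    lrn = (learned >= threshold).astype(np.uint8)
    best = 0
    for dy, dx in shifts:
        shifted = np.roll(np.roll(lrn, dy, axis=1), dx, axis=2)
        best = max(best, int(np.sum(est * shifted)))
    return best

# the learning likelihood map is judged to match when this evaluation value
# is at or above a threshold chosen in advance by learning
```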
- In a second example of the matching degree determination process, the posture state estimation unit 170 superimposes the estimated likelihood map and the learning likelihood map so that the main edge directions of the parts coincide, and then calculates the degree of coincidence.
- the subsequent processing is the same as that of the first example described above.
- the method in which the direction of the edge is taken into consideration can add constraints to the positional relationship of superposition between the estimated likelihood map and the learning likelihood map, thereby reducing the processing load.
- Alternatively, the posture state estimation unit 170 may use only the information on the edge directions when calculating the degree of coincidence between the estimated likelihood map and the learning likelihood map. In this case, for example, the posture state estimation unit 170 uses the degree of coincidence of the angles formed between the edge directions of the designated parts, over a plurality of designated parts, as the evaluation value representing the degree of coincidence between the estimated likelihood map and the learning likelihood map. When this evaluation value falls within a predetermined range, the posture state estimation unit 170 determines that the subject is in the posture state corresponding to that learning likelihood map.
- The edge direction corresponds to the direction of the part axis. Such posture state estimation therefore amounts to estimating the direction of each part axis and the angle of each joint from the image data, and evaluating how well the estimated part axis directions and joint angles agree with those of the reference model in each posture state.
- The method of determining the degree of coincidence using only the edge directions eliminates the need to repeatedly compute evaluation values while rotating the maps, and thus further reduces the processing load. This is the end of the description of the second example of the matching degree determination process.
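- A toy sketch of such a direction-only comparison, assuming part-axis angles in degrees keyed by part ID (aggregating into a mean difference is an assumption, not the patent's exact evaluation value):

```python
import numpy as np

def direction_only_score(est_angles, ref_angles, pairs):
    """Compare, for each designated pair of parts, the angle formed between
    their axis directions in the estimate and in the reference; the mean
    difference (in degrees) is used here as the evaluation value."""
    def pair_angle(angles, a, b):
        d = abs(angles[a] - angles[b]) % 180.0
        return min(d, 180.0 - d)        # undirected angle between two axes
    diffs = [abs(pair_angle(est_angles, a, b) - pair_angle(ref_angles, a, b))
             for a, b in pairs]
    return float(np.mean(diffs))        # smaller means better agreement

score = direction_only_score({0: 10.0, 1: 95.0, 2: 40.0},
                             {0: 12.0, 1: 90.0, 2: 45.0},
                             pairs=[(0, 1), (1, 2)])
```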
- The posture state estimation unit 170 proceeds to step S1800 when any of the learning likelihood maps matches the estimated likelihood map (S1700: YES). When no learning likelihood map matches the estimated likelihood map (S1700: NO), the posture state estimation unit 170 proceeds to step S1900.
- In step S1800, the posture state estimation unit 170 notifies the user, via the information output device 300, of the posture state corresponding to the learning likelihood map that matches the estimated likelihood map, and proceeds to step S1900.
- In step S1900, the part region estimation unit 130 determines whether an instruction to end the processing has been issued by a user operation or the like. If there is no end instruction (S1900: NO), the part region estimation unit 130 returns to step S1100 and proceeds to the processing of the next still image. If there is an end instruction (S1900: YES), the part region estimation unit 130 ends the series of processing.
- As described above, when a portion of an unextracted part is hidden behind an already extracted part, the posture state estimation apparatus 100 according to the present embodiment estimates this and extracts a part candidate of the unextracted part from the image data.
- Therefore, for example, even when part of the upper right arm of a person facing left is in the shadow of the upper left arm located on the near side, the posture state estimation apparatus 100 can extract a part candidate for the upper right arm. The posture state estimation apparatus 100 can then estimate the posture state using the part candidate of the upper right arm. That is, the posture state estimation apparatus 100 can estimate the posture state of an object having joints, such as a person, with higher accuracy than the prior art.
- Moreover, since the posture state estimation apparatus 100 uses a likelihood map indicating the distribution of likelihood for each part, it can determine, for example, whether the subject is in the posture state "the right arm is bent" even when the right arm falls within the outline of the torso in the image.
- In addition, since the posture state estimation apparatus 100 estimates a part region, that is, the region in which a designated part can move, and lowers the likelihood values outside that part region, the accuracy of the likelihood map can be improved.
- Note that the posture state estimation apparatus 100 may estimate only a specifically designated posture state and output, as the estimation result, whether or not the subject's posture corresponds to that designated posture state.
- the image data used for object detection may be data of an image captured by a stereo camera or a plurality of cameras.
- When a stereo camera is used, the posture state estimation apparatus 100 may use the image data captured by one of its cameras together with position information of the object obtained from the installation parameters of the stereo camera.
- When a plurality of cameras is used, the posture state estimation apparatus 100 may use the image data captured by one of the cameras together with position information of the object obtained from the installation parameters of each camera.
- The part region estimation unit 130 does not necessarily have to perform the reference part estimation process described above.
- In that case, for example, the part region estimation unit 130 may hold body orientation information in advance.
- the method of estimation of the part area performed by the part area estimation unit 130 is not limited to the above-described example.
- For example, the part region estimation unit 130 may extract edge portions of the image (hereinafter simply referred to as "edges") from the image data and estimate each part region based on the range of Y-coordinate values of the area enclosed by the edges. Specifically, for example, in the area enclosed by the edges, the part region estimation unit 130 estimates, as the region of the head, the region extending from the position with the highest Y-coordinate value over 20% of the Y-coordinate range.
- Similarly, the part region estimation unit 130 may estimate the 15% to 65% range as the region of the torso, the 55% to 85% range as the region above the knees, and the 75% to 100% range as the region below the knees. In this case, the values corresponding to these percentage ranges serve as the body constraint information.
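- A minimal sketch of this percentage-based split (the dictionary layout and the assumption that the head lies at the start of the Y range are illustrative, not the patent's data format):

```python
def part_regions_from_y_range(y_head_end, y_foot_end):
    """Split the vertical extent of the edge-enclosed area into part regions
    using percentage ranges of the kind given as body constraint information."""
    span = y_foot_end - y_head_end
    ranges = {                      # (start %, end %) measured from the head end
        "head":        (0.00, 0.20),
        "torso":       (0.15, 0.65),
        "above knees": (0.55, 0.85),
        "below knees": (0.75, 1.00),
    }
    return {part: (y_head_end + span * lo, y_head_end + span * hi)
            for part, (lo, hi) in ranges.items()}

print(part_regions_from_y_range(0, 200))    # head occupies y in [0, 40], etc.
```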
- Alternatively, the part region estimation unit 130 may extract the moving body by taking the background difference between images of the underlying moving image data, and set the entire area containing the extracted region as a candidate part region for every part. This speeds up the processing when estimating the part regions.
- The posture state estimation apparatus 100 may also estimate the part regions by estimating the position of each part one at a time, in order of proximity to the reference part, and repeatedly estimating the part region of the next part based on the position just estimated.
- Furthermore, the posture state estimation apparatus 100 does not necessarily have to perform part region estimation at all.
- In that case, the part candidate extraction unit 140 calculates likelihood values uniformly over all areas of the image.
- The posture state estimation unit 170 may also use, as comparison targets, learning likelihood maps corresponding to the installation angle of the monocular camera 200.
- the posture state estimation apparatus 100 may hold body restriction information for each subject and perform posture state estimation.
- the posture state estimation apparatus 100 may use a region indicating a part candidate as part candidate information as in the technique described in Patent Document 1.
- In this case, the complementary part candidate extraction unit 160 moves, for example, a rectangle of the same size as the determined complementation candidate area 583 (see FIG. 23) over the map of extracted complementary part candidates. The complementary part candidate extraction unit 160 then extracts the position (for example, the coordinates of three vertices of the rectangle) at which the sum of the likelihood values of the pixels contained in the rectangle is largest.
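- One way to find that maximizing position efficiently is an integral image, as in the sketch below (an illustrative implementation, not the patent's):

```python
import numpy as np

def best_rectangle_position(likelihood_map, rect_h, rect_w):
    """Slide a rect_h x rect_w window over a likelihood map and return the
    top-left corner that maximizes the sum of likelihood values, computing
    each window sum in O(1) via an integral image."""
    ii = np.pad(likelihood_map.astype(float), ((1, 0), (1, 0)))
    ii = ii.cumsum(axis=0).cumsum(axis=1)
    sums = (ii[rect_h:, rect_w:] - ii[:-rect_h, rect_w:]
            - ii[rect_h:, :-rect_w] + ii[:-rect_h, :-rect_w])
    top, left = np.unravel_index(np.argmax(sums), sums.shape)
    return top, left, sums[top, left]
```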
- the present invention can be applied to the posture state estimation method described in Patent Document 1.
- the method of estimating the posture state performed by the posture state estimation unit 170 is not limited to the above-described example.
- the posture state estimation unit 170 may estimate the posture state using information on a human reference model (hereinafter referred to as “reference model information”).
- The reference model information is, for example, information describing, for each state on the image obtained when a certain posture state is viewed from a certain viewpoint (hereinafter referred to as an "image posture state"), items such as the angle and on-image position of each joint and the on-image length and movable range of each part.
- In other words, the reference model information is information that constrains the physique and posture of the reference model.
- For example, the reference model information indicates, for each posture state, the joint positions and the like as seen from the camera viewpoint. The posture state estimation unit 170 can then estimate the posture state by searching the reference model information for the posture state whose joint positions most closely match those of the subject.
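- A toy sketch of such a search, assuming joint positions are stored as (x, y) arrays in a consistent joint order (the centroid normalization is an added assumption):

```python
import numpy as np

def estimate_posture(observed_joints, reference_models):
    """Return the image posture state whose reference joint positions are
    closest (mean Euclidean distance after centroid removal) to the joint
    positions estimated for the subject."""
    obs = np.asarray(observed_joints, float)
    obs = obs - obs.mean(axis=0)               # crude translation invariance
    best_name, best_cost = None, float("inf")
    for name, joints in reference_models.items():
        ref = np.asarray(joints, float)
        ref = ref - ref.mean(axis=0)
        cost = np.linalg.norm(obs - ref, axis=1).mean()
        if cost < best_cost:
            best_name, best_cost = name, cost
    return best_name, best_cost
```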
- the present invention is useful as a posture state estimation device and a posture state estimation method capable of accurately estimating the posture state of an object having a joint.
- 100 posture state estimation apparatus, 110 body constraint information storage unit, 120 image data acquisition unit, 130 part region estimation unit, 140 part candidate extraction unit, 150 part candidate determination unit, 160 complementary part candidate extraction unit, 161 foreground part estimation unit, 162 exposed area estimation unit, 163 exposed area integration unit, 164 edge extraction area determination unit, 165 edge extraction unit, 166 complementation candidate area determination unit, 167 part candidate information correction unit, 170 posture state estimation unit, 200 monocular camera, 300 information output device
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Description
Begin
Part ID: 3
Length ratio: 0.8
End
Begin
Part ID: 3
Thickness ratio: 0.2
End
Begin
Joint ID: 100
Part ID: 1
Part ID: 3
Movable directions: rx, ry, rz
Angles: (-60.0, 90.0), (-90.0, 90.0), (-90.0, 90.0)
End
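The Begin/End records above suggest a simple line-oriented format; purely as an illustration (this parser and its dictionary output are assumptions, not part of the patent), they could be read as follows:

```python
def parse_constraint_blocks(text):
    """Parse Begin/End records like the ones above into dicts whose values
    are lists, since a block may repeat a key (e.g. two Part IDs per joint)."""
    records, current = [], None
    for raw in text.splitlines():
        line = raw.strip()
        if line == "Begin":
            current = {}
        elif line == "End" and current is not None:
            records.append(current)
            current = None
        elif current is not None and ":" in line:
            key, value = line.split(":", 1)
            current.setdefault(key.strip(), []).append(value.strip())
    return records

blocks = parse_constraint_blocks("Begin\nJoint ID: 100\nPart ID: 1\nPart ID: 3\nEnd")
# -> [{'Joint ID': ['100'], 'Part ID': ['1', '3']}]
```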
Claims (10)
- A posture state estimation apparatus that estimates a posture state of an object having a plurality of parts connected by joints, based on image data obtained by photographing the object, the apparatus comprising: a part candidate extraction unit that extracts part candidates of the parts from the image data; a complementary part candidate extraction unit that estimates that a part of an unextracted part, for which no part candidate was extracted by the part candidate extraction unit, is hidden behind an already extracted part, for which a part candidate was extracted by the part candidate extraction unit, and extracts a part candidate of the unextracted part from the image data; and a posture state estimation unit that estimates the posture state of the object based on the extracted part candidates.
- The posture state estimation apparatus according to claim 1, wherein the complementary part candidate extraction unit determines, based on the part candidate of the already extracted part, an area in which a part of the unextracted part is estimated to be exposed as a complementation candidate area, and extracts the part candidate of the unextracted part while narrowing the search to the determined complementation candidate area.
- The posture state estimation apparatus according to claim 2, wherein the complementary part candidate extraction unit estimates a part axis of the already extracted part based on the part candidate of the already extracted part, and determines the complementation candidate area based on the estimated part axis.
- The posture state estimation apparatus according to claim 2, wherein the complementary part candidate extraction unit estimates a part thickness of the already extracted part based on the part candidate of the already extracted part, and determines the complementation candidate area based on the estimated part thickness.
- The posture state estimation apparatus according to claim 2, wherein the complementary part candidate extraction unit estimates a movable range of the unextracted part based on the part candidate of the already extracted part, and determines the complementation candidate area based on the estimated movable range.
- The posture state estimation apparatus according to claim 2, wherein the complementary part candidate extraction unit estimates an angle of an edge of the unextracted part based on the part candidate of the already extracted part, extracts a straight-line component at the estimated angle from the image data, and determines the complementation candidate area based on the extracted straight-line component.
- The posture state estimation apparatus according to claim 2, wherein, when there are a plurality of already extracted parts, the complementary part candidate extraction unit excludes the part candidates of all the already extracted parts from the complementation candidate area.
- The posture state estimation apparatus according to claim 1, wherein the posture state estimation unit estimates the posture state of the object based on information corresponding to the direction of each part axis estimated from the image data.
- The posture state estimation apparatus according to claim 1, wherein the object is a person.
- A posture state estimation method for estimating a posture state of an object having a plurality of parts connected by joints, based on image data obtained by photographing the object, the method comprising: a step of extracting part candidates of the parts from the image data; a step of estimating that a part of an unextracted part, for which no part candidate was extracted by the part candidate extraction unit, is hidden behind an already extracted part, for which a part candidate was extracted by the part candidate extraction unit, and extracting a part candidate of the unextracted part from the image data; and a step of estimating the posture state of the object based on the extracted part candidates.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201280001912.XA CN102971768B (zh) | 2011-01-24 | 2012-01-10 | 姿势状态估计装置及姿势状态估计方法 |
US13/820,206 US9646381B2 (en) | 2011-01-24 | 2012-01-10 | State-of-posture estimation device and state-of-posture estimation method |
US15/482,010 US10600207B2 (en) | 2011-01-24 | 2017-04-07 | Posture state estimation apparatus and posture state estimation method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2011-011860 | 2011-01-24 | ||
JP2011011860A JP5715833B2 (ja) | 2011-01-24 | 2011-01-24 | 姿勢状態推定装置および姿勢状態推定方法 |
Related Child Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/820,206 A-371-Of-International US9646381B2 (en) | 2011-01-24 | 2012-01-10 | State-of-posture estimation device and state-of-posture estimation method |
US15/482,010 Continuation US10600207B2 (en) | 2011-01-24 | 2017-04-07 | Posture state estimation apparatus and posture state estimation method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012101962A1 true WO2012101962A1 (ja) | 2012-08-02 |
Family
ID=46580543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2012/000090 WO2012101962A1 (ja) | 2011-01-24 | 2012-01-10 | 姿勢状態推定装置および姿勢状態推定方法 |
Country Status (4)
Country | Link |
---|---|
US (2) | US9646381B2 (ja) |
JP (1) | JP5715833B2 (ja) |
CN (1) | CN102971768B (ja) |
WO (1) | WO2012101962A1 (ja) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9412010B2 (en) * | 2011-07-15 | 2016-08-09 | Panasonic Corporation | Posture estimation device, posture estimation method, and posture estimation program |
JP2014090841A (ja) * | 2012-11-02 | 2014-05-19 | Sony Corp | 情報処理装置および情報処理方法、並びにプログラム |
JP6155785B2 (ja) * | 2013-04-15 | 2017-07-05 | オムロン株式会社 | 画像処理装置、画像処理装置の制御方法、画像処理プログラムおよびその記録媒体 |
JP6474396B2 (ja) * | 2014-06-03 | 2019-02-27 | 住友重機械工業株式会社 | 人検知システム及びショベル |
JP2016162072A (ja) | 2015-02-27 | 2016-09-05 | 株式会社東芝 | 特徴量抽出装置 |
US9650039B2 (en) * | 2015-03-20 | 2017-05-16 | Ford Global Technologies, Llc | Vehicle location accuracy |
JP6590609B2 (ja) | 2015-09-15 | 2019-10-16 | キヤノン株式会社 | 画像解析装置及び画像解析方法 |
US11854308B1 (en) * | 2016-02-17 | 2023-12-26 | Ultrahaptics IP Two Limited | Hand initialization for machine learning based gesture recognition |
US11841920B1 (en) | 2016-02-17 | 2023-12-12 | Ultrahaptics IP Two Limited | Machine learning based gesture recognition |
US11714880B1 (en) | 2016-02-17 | 2023-08-01 | Ultrahaptics IP Two Limited | Hand pose estimation for machine learning based gesture recognition |
US9870622B1 (en) * | 2016-07-18 | 2018-01-16 | Dyaco International, Inc. | Systems and methods for analyzing a motion based on images |
US20180121729A1 (en) * | 2016-11-02 | 2018-05-03 | Umbo Cv Inc. | Segmentation-based display highlighting subject of interest |
JP2019091437A (ja) * | 2017-11-10 | 2019-06-13 | 株式会社リコー | 対象検出システム、対象検出方法、プログラム |
JP6988406B2 (ja) * | 2017-11-27 | 2022-01-05 | 富士通株式会社 | 手位置検出方法、手位置検出装置、及び手位置検出プログラム |
JP6773829B2 (ja) * | 2019-02-21 | 2020-10-21 | セコム株式会社 | 対象物認識装置、対象物認識方法、及び対象物認識プログラム |
JP7263094B2 (ja) * | 2019-04-22 | 2023-04-24 | キヤノン株式会社 | 情報処理装置、情報処理方法及びプログラム |
JP7350602B2 (ja) * | 2019-10-07 | 2023-09-26 | 株式会社東海理化電機製作所 | 画像処理装置、およびコンピュータプログラム |
JP7297633B2 (ja) * | 2019-10-07 | 2023-06-26 | 株式会社東海理化電機製作所 | 画像処理装置、およびコンピュータプログラム |
JP7312079B2 (ja) * | 2019-10-07 | 2023-07-20 | 株式会社東海理化電機製作所 | 画像処理装置、およびコンピュータプログラム |
JP7401246B2 (ja) * | 2019-10-08 | 2023-12-19 | キヤノン株式会社 | 撮像装置、撮像装置の制御方法、及びプログラム |
CN110826495A (zh) * | 2019-11-07 | 2020-02-21 | 济南大学 | 基于面部朝向的身体左右肢体一致性跟踪判别方法及系统 |
TWI790152B (zh) * | 2022-03-31 | 2023-01-11 | 博晶醫電股份有限公司 | 動作判定方法、動作判定裝置及電腦可讀儲存媒體 |
CN115500819A (zh) * | 2022-09-13 | 2022-12-23 | 江苏科技大学 | 一种应用于康复训练系统调整站位的方法 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007010893A1 (ja) * | 2005-07-19 | 2007-01-25 | Nec Corporation | 関節物体位置姿勢推定装置及びその方法ならびにプログラム |
JP2010211705A (ja) * | 2009-03-12 | 2010-09-24 | Denso Corp | 乗員姿勢推定装置 |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4419543B2 (ja) * | 2003-12-05 | 2010-02-24 | コニカミノルタホールディングス株式会社 | 検出装置および検出方法 |
JP2007310707A (ja) * | 2006-05-19 | 2007-11-29 | Toshiba Corp | 姿勢推定装置及びその方法 |
US9165199B2 (en) * | 2007-12-21 | 2015-10-20 | Honda Motor Co., Ltd. | Controlled human pose estimation from depth image streams |
US8963829B2 (en) * | 2009-10-07 | 2015-02-24 | Microsoft Corporation | Methods and systems for determining and tracking extremities of a target |
- 2011-01-24 JP JP2011011860A patent/JP5715833B2/ja not_active Expired - Fee Related
- 2012-01-10 US US13/820,206 patent/US9646381B2/en active Active
- 2012-01-10 WO PCT/JP2012/000090 patent/WO2012101962A1/ja active Application Filing
- 2012-01-10 CN CN201280001912.XA patent/CN102971768B/zh not_active Expired - Fee Related
- 2017-04-07 US US15/482,010 patent/US10600207B2/en not_active Expired - Fee Related
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2007010893A1 (ja) * | 2005-07-19 | 2007-01-25 | Nec Corporation | 関節物体位置姿勢推定装置及びその方法ならびにプログラム |
JP2010211705A (ja) * | 2009-03-12 | 2010-09-24 | Denso Corp | 乗員姿勢推定装置 |
Non-Patent Citations (1)
Title |
---|
TOMOKI MURAKAMI ET AL.: "Jiko Occlusion o Fukumu Jinbutsu Shisei no Kyori Gazo ni yoru Suitei", DAI 65 KAI (HEISEI 15 NEN) ZENKOKU TAIKAI KOEN RONBUNSHU(2), JINKO CHINO TO NINCHI KAGAKU, 25 March 2003 (2003-03-25), pages 2-361 - 2-362 * |
Also Published As
Publication number | Publication date |
---|---|
JP5715833B2 (ja) | 2015-05-13 |
US20170213360A1 (en) | 2017-07-27 |
CN102971768B (zh) | 2016-07-06 |
US9646381B2 (en) | 2017-05-09 |
JP2012155391A (ja) | 2012-08-16 |
US20130259391A1 (en) | 2013-10-03 |
US10600207B2 (en) | 2020-03-24 |
CN102971768A (zh) | 2013-03-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5715833B2 (ja) | 姿勢状態推定装置および姿勢状態推定方法 | |
JP5771413B2 (ja) | 姿勢推定装置、姿勢推定システム、および姿勢推定方法 | |
JP5873442B2 (ja) | 物体検出装置および物体検出方法 | |
US9818023B2 (en) | Enhanced face detection using depth information | |
US9262674B2 (en) | Orientation state estimation device and orientation state estimation method | |
Holte et al. | A local 3-D motion descriptor for multi-view human action recognition from 4-D spatio-temporal interest points | |
Medioni et al. | Identifying noncooperative subjects at a distance using face images and inferred three-dimensional face models | |
US10043279B1 (en) | Robust detection and classification of body parts in a depth map | |
CN103514432A (zh) | 人脸特征提取方法、设备和计算机程序产品 | |
JP2012123667A (ja) | 姿勢推定装置および姿勢推定方法 | |
Hua et al. | Pedestrian detection by using a spatio-temporal histogram of oriented gradients | |
Unzueta et al. | Efficient generic face model fitting to images and videos | |
Dupuis et al. | Robust radial face detection for omnidirectional vision | |
CN108694348B (zh) | 一种基于自然特征的跟踪注册方法及装置 | |
Chen et al. | Extracting and matching lines of low-textured region in close-range navigation for tethered space robot | |
Han et al. | RGB-D human identification and tracking in a smart environment | |
Dai et al. | Research on recognition of painted faces | |
Wang et al. | Mining discriminative 3D Poselet for cross-view action recognition | |
Khemmar et al. | Face Detection & Recognition based on Fusion of Omnidirectional & PTZ Vision Sensors and Heteregenous Database | |
Sharma et al. | Eco: Egocentric cognitive mapping | |
Unzueta et al. | Efficient deformable 3D face model fitting to monocular images | |
Brauers | Person tracking and following by a mobile robotic platform | |
Ferreira | Cambada@ home: Deteção e Seguimento de Humanos | |
Chen et al. | Robust facial feature detection and tracking for head pose estimation in a novel multimodal interface for social skills learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201280001912.X Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12738713 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13820206 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 12738713 Country of ref document: EP Kind code of ref document: A1 |