KR20170025535A - Method of modeling a video-based interactive activity using the skeleton posture datset - Google Patents
Method of modeling a video-based interactive activity using the skeleton posture datset
- Publication number
- KR20170025535A KR1020150122086A KR20150122086A
- Authority
- KR
- South Korea
- Prior art keywords
- mutual
- data set
- features
- modeling
- group
- Prior art date
Classifications
-
- G06K9/00335—
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G06K9/00342—
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Image Analysis (AREA)
Abstract
Description
A technique for generating a video-based interactive activity model using a skeleton posture data set is disclosed.
Although human activity recognition has received increasing attention from the computer vision and artificial intelligence communities in recent decades, it remains a challenging problem due to changes in appearance, mutual occlusion, and the interaction of multiple objects.
While earlier techniques attempted to recognize human activity using the behavior of human body parts as input features, recent work has focused on collecting low-level features. For example, due to the limitations of image processing techniques, there has been a tendency to concentrate on low-level features such as spatio-temporal features instead of representations of the human body such as a skeleton.
A technique for representing human interactive actions from video captured from surveillance cameras is presented.
Within a group or between groups, a technique for modeling interactions between human objects is presented.
By omitting non-interacting objects, the goal is to reduce the computational cost.
The aim is to improve the quality of the feature training data set, because outlier values are not included in the data set.
The goal is to improve classification accuracy in distinguishing between single object action recognition and interactive group activity recognition.
A method for modeling an interactive activity according to an exemplary embodiment includes receiving a data set of two-dimensional skeleton positions extracted from a video, calculating position coordinates of an object from the input data set, calculating tracking characteristics including a motion velocity and a motion direction of the object, determining an interacting object based on the interaction zone corresponding to the object and the calculated tracking characteristics, calculating features from a skeleton data set for the determined interacting object, and modeling the calculated features into topics of single human actions and interactive group activities.
Computing the position coordinates of the object according to an embodiment includes detecting position coordinates for the object using the four torso joints from the input data set.
The interacting objects and the corresponding non-interacting objects according to an embodiment are determined through the interaction potential zone and the tracking characteristics.
Calculating the tracking characteristics including the motion velocity and the motion direction of the object according to an embodiment may include extracting, from the skeleton position data set, the spatial-temporal joint distances of the object and the motion direction between human objects.
The modeling step according to an embodiment includes generating a probability model for the single human actions and the interactive group activities using a modeling algorithm.
According to an embodiment of the present invention, there is provided a method for identifying interacting objects, comprising the steps of receiving position coordinates of an object; determining single interaction potential zones located within a predetermined range from the object based on the received position coordinates; calculating a ratio of overlapping regions for each object based on the determined single interaction potential zones; and identifying the object by comparing the calculated ratio with a threshold assigned to the group ID of each object.
The step of determining the single interaction potential zones according to an embodiment includes determining the single interaction potential zones based on the position coordinates of the object and the radius of a circle.
The ratio of the single interaction potential zones to the overlapping region according to one embodiment is identified for each object.
The ratio according to one embodiment is compared with a threshold value for determining the group ID of each object.
A method of configuring a feature data set according to an exemplary embodiment includes receiving a group ID, comparing the number of objects for each group corresponding to the group ID, extracting features for at least one of the coordinates in consideration of the compared number of objects, recognizing a data set corresponding to the extracted features, and acquiring an intra-object feature data set and an inter-object feature data set based on the recognized data set.
The step of extracting the features according to an embodiment includes extracting features for x = y coordinates when there is one object in the group.
The step of extracting the features according to an embodiment may include, when there is more than one object in the group, extracting features for the corresponding coordinates. The method of constructing a feature data set according to an embodiment may further include classifying the objects into two groups of non-interacting objects and interacting objects in consideration of the result of comparing the number of objects.
The step of classifying the objects into two groups of non-interacting objects and interacting objects according to an exemplary embodiment includes extracting spatial-temporal joint distance and motion direction features for the objects.
The step of extracting the features according to an embodiment includes extracting at least one feature from an intra-object feature data set for single action recognition and an inter-object feature data set for interactive activity recognition.
A method for generating a probability model according to an exemplary embodiment includes receiving a feature data set, clustering features in the feature data set into codewords by applying a K-means clustering algorithm, mapping intra-object features and inter-object features according to the clustering to codeword histograms of actions and activities, encoding the words using a hierarchical model based on the mapped histograms, and outputting a probability model using the encoded words.
Outputting the probability model according to an embodiment includes generating the probability model through topic modeling based on the hierarchical topic model.
The interactive activity modeling program according to an embodiment includes a set of instructions for receiving a data set of two-dimensional skeleton positions extracted from a video, a set of instructions for computing position coordinates of an object from the input data set, a set of instructions for calculating tracking characteristics including a motion velocity and a motion direction of the object from the computed position coordinates, a set of instructions for determining an interacting object based on an interaction zone corresponding to the object and the calculated tracking characteristics, a set of instructions for computing features from a skeleton data set for the determined interacting object, and a set of instructions for modeling the computed features into topics of single human actions and interactive group activities.
According to embodiments, human interactive actions can be represented from video captured from surveillance cameras.
According to embodiments, interactions between human objects can be modeled within a group or between groups.
According to embodiments, omitting non-interacting objects may reduce the computational cost.
According to embodiments, the quality of the feature training data set can be improved since outlier values are not included in the data set.
According to embodiments, the classification accuracy can be improved in distinguishing between single object action recognition and interactive group activity recognition.
FIG. 1 is a flowchart of a modeling method for single action and interactive activity recognition.
FIG. 2 is a diagram showing the 14-joint human posture, the determination of the center point, and the distance and movement direction of objects.
FIG. 3 is a view for explaining interaction zone determination and object group establishment.
FIG. 4 is a diagram for explaining the determination of four features using joint position information.
FIG. 5 shows the process of interaction zone identification and object group creation.
FIG. 6 shows a process of configuring two feature data sets divided into an intra-object feature data set and an inter-object feature data set.
FIG. 7 shows the process of codebook generation and topic modeling for the two feature data sets.
FIG. 8 is a diagram for explaining an embodiment of mapping one feature vector to a histogram of codewords.
FIG. 9 is a diagram showing a hierarchical model for a topic model with a four-level structure.
Hereinafter, embodiments will be described in detail with reference to the accompanying drawings. However, the scope of the rights is not limited or restricted by these embodiments. Like reference symbols in the drawings denote like elements.
The terms used in the following description are chosen to be generic and universal in the related art, but other terms may exist depending on developments and/or changes in technology, customs, preferences of practitioners, and the like. Accordingly, the terminology used in the following description should not be construed as limiting the technical idea, but should be understood as exemplary language used to describe the embodiments.
Also, in certain cases there may be terms chosen arbitrarily by the applicant, in which case their meaning will be described in detail in the corresponding part of the description. Therefore, a term used in the following description should be understood based on its meaning and on the contents throughout the specification, not simply on its name.
FIG. 1 is a flowchart of a modeling method for single action and interactive activity recognition.
In the interactive activity modeling method according to an exemplary embodiment, a data set of two-dimensional skeleton positions extracted from a video is input (step 101).
The input data includes human object skeletons with joint position information.
The output corresponding to the input data is a probability model of the single object actions and interactive group activities needed to support a classifier. To this end, the interactive activity modeling method according to an embodiment calculates the position coordinates of an object from the input data set (step 102). That is, the joint positions are used to determine the position of each object; for this purpose, the position coordinates of the object are calculated from the torso joints.
For example, the interactive activity modeling method may detect the position coordinates of an object using the four torso joints from the input data set in order to calculate the position coordinates of the object.
According to Yang et al., each human posture includes 14 joint elements.
The 14 joint elements will be described later in detail with reference to FIG. 2.
Next, the interactive activity modeling method according to an exemplary embodiment calculates tracking characteristics including the movement speed and movement direction of the object from the calculated object coordinates (step 103). In addition, the method may determine interacting objects based on the interaction zone corresponding to each object and the calculated tracking characteristics (step 104), and extract features from the skeleton data set for the determined interacting objects (step 105).
For example, in order to calculate the tracking characteristics, the interactive activity modeling method according to one embodiment extracts, from the skeleton position data set, the spatial-temporal joint distances of an object and the direction of motion between human objects.
The interactive activity modeling method according to an exemplary embodiment models the calculated features into topics of single human actions and interactive group activities (step 106), and generates a probability model using the modeling result (step 107).
For example, the interactive activity modeling method can generate a probability model for single human actions and intergroup activities using a modeling algorithm.
FIG. 2 is a diagram showing the 14-joint human posture, the determination of the center point, the distance of objects, and the direction of movement.
As shown in FIG. 2, the center point of each human object is determined from the four torso joints.
Specifically, the center coordinates of the object shown in FIG. 2 can be calculated by the following Equation (1):
[Equation 1]
C_t^x = (1/4) Σ_{i ∈ torso} p_{i,t}^x
where C_t^x denotes the center coordinates of human object x in the t-th frame and p_{i,t}^x denotes the 2D position of torso joint i. The tracking algorithm is based on the motion velocity F_v^x and direction F_θ^x of the object, which can be calculated from the object center coordinates of consecutive frames t − 1 and t by Equations (2) and (3) below:
[Equation 2]
F_v^x(t−1, t) = ‖C_t^x − C_{t−1}^x‖
[Equation 3]
F_θ^x(t−1, t) = arctan(Δy / Δx), where (Δx, Δy) = C_t^x − C_{t−1}^x
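For illustration only, the following Python sketch computes the object center from four torso joints and the per-frame velocity and direction as reconstructed in Equations (1)-(3). The torso joint indices, function names, and use of NumPy are assumptions made for this sketch and are not part of the disclosed implementation.

```python
import numpy as np

# Minimal sketch of Equations (1)-(3): object center from the four torso
# joints, then motion velocity and direction between consecutive frames.
# The torso joint indices below are illustrative placeholders; the actual
# indices depend on the 14-joint skeleton layout used.
TORSO_JOINTS = [1, 2, 8, 11]  # e.g., shoulders and hips (assumed layout)

def object_center(skeleton):
    """skeleton: (14, 2) array of 2D joint positions for one person."""
    return skeleton[TORSO_JOINTS].mean(axis=0)            # Equation (1)

def velocity_and_direction(center_prev, center_curr):
    """Motion velocity and direction between frames t-1 and t."""
    delta = center_curr - center_prev
    speed = float(np.linalg.norm(delta))                   # Equation (2)
    direction = float(np.arctan2(delta[1], delta[0]))      # Equation (3)
    return speed, direction

# Example usage with random joint data for two consecutive frames.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame_prev, frame_curr = rng.random((2, 14, 2))
    c_prev, c_curr = object_center(frame_prev), object_center(frame_curr)
    print(velocity_and_direction(c_prev, c_curr))
```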
What is important to the interaction representation is how to identify objects that interact with other objects in the current scene.
The following advantages can be obtained by performing this identification before the feature extraction step, instead of computing features for all detected objects.
By omitting non-interacting objects, the computational cost can be reduced.
Since outlier values are not included in the data set, the quality of the feature training data set can be improved; that is, single (non-interacting) objects are not considered in interaction detection and recognition.
Classification accuracy can be improved in distinguishing between single object action recognition and interactive group activity recognition.
In the present invention, an IPZ (Interaction Potential Zone) algorithm can be used.
The IPZ (Interaction Potential Zone) algorithm will be described in detail with reference to FIG. 3.
FIG. 3 is a view for explaining interaction zone determination and object group establishment.
As shown in FIG. 3, the IPZ (Interaction Potential Zone) is the basic unit necessary for detecting the Group Interaction Zone (GIZ).
Each object has an interaction zone located in its periphery, defined as a circle of a given radius around the object.
Thus, the IPZ of each object can be established based on the object center coordinates.
Next, the overlap ratio of the IPZs is calculated by Equation (4):
[Equation 4]
R^x = Σ_{y=1}^{n} area(IPZ^x ∩ IPZ^y) / area(IPZ^x)
where IPZ^x is the IPZ of human object x and n is the number of people whose IPZs overlap it.
If n = 0, the object stands alone without interaction. Otherwise, the set of human objects is assigned interactions through the comparison operation of Equation (5).
[Equation 5]
R^x ≥ τ
where τ is a threshold that controls the likelihood that a set of human objects will be placed in the same group. Group assignment can be explained in three situations.
If the current object stands alone without any overlapping area, a new group identifier (GID, GroupID) is assigned.
If another object does not overlap with the current object, the two objects are assigned new, different group identifiers (GID, GroupID).
If the two objects currently overlap, the two objects are assigned the same group identifier (GID, GroupID).
The output is the set of group identifiers (GID, GroupID) of the objects.
However, there are special cases for assigning group identifiers (GIDs, GroupIDs).
For example, as shown in FIG. 3, a moving object may approach or leave an existing group.
For this situation, it is also necessary to assign a group identifier (GID, GroupID).
In this situation, dynamic (moving) objects must be taken into account; that is, an object is identified as moving when F_v^x(t−1, t) ≥ δ, where δ is a speed threshold distinguishing moving objects from stationary ones.
From the current position and the speed and direction of movement, the position of the object at the next time (t + 1) is predicted as shown in Equation (6):
[Equation 6]
C_{t+1}^x = C_t^x + F_v^x(t−1, t) · (cos F_θ^x(t−1, t), sin F_θ^x(t−1, t))
If the next position value is within the IPZ of another group, the group identifier (GID, GroupID) of the object is changed to the GID of that destination group, as expressed by Equation (7):
[Equation 7]
GID(x) ← GID(destination group)
This corresponds to the first special case shown in FIG. 3.
If the next position value is outside the IPZ of any other group, the GID of the object is changed to a new group identifier as in Equation (8); the new group identifier does not coincide with any already existing GID:
[Equation 8]
GID(x) ← GID_new, GID_new ∉ {existing GIDs}
Equation (8) may be applied to the situation indicated at 302 in FIG. 3.
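To make the IPZ-based grouping concrete, the sketch below assigns group identifiers from circular IPZs in the spirit of Equations (4)-(8). The circle radius, the thresholds, the linear overlap approximation, and the absence of group merging are simplifying assumptions for illustration, not the disclosed algorithm.

```python
import numpy as np

def ipz_overlap_ratio(c1, c2, r):
    """Approximate overlap ratio of two circular IPZs of radius r.
    The linear approximation below is an assumption for illustration;
    Equation (4) uses the ratio of overlapping area to IPZ area."""
    d = np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float))
    return max(0.0, 1.0 - d / (2.0 * r))

def assign_group_ids(centers, r, tau):
    """Assign a group identifier (GID) to each object center.
    Objects whose IPZ overlap ratio meets the threshold tau share a GID
    (Equation (5)); isolated objects receive new GIDs.
    Simplified: no merging of groups that become linked later."""
    gids = [-1] * len(centers)
    next_gid = 0
    for i, ci in enumerate(centers):
        if gids[i] == -1:
            gids[i] = next_gid
            next_gid += 1
        for j in range(i + 1, len(centers)):
            if gids[j] == -1 and ipz_overlap_ratio(ci, centers[j], r) >= tau:
                gids[j] = gids[i]          # same group (overlapping IPZs)
    return gids

def predict_next_center(center, speed, direction):
    """Equation (6): predict the next-frame position of a moving object."""
    return np.asarray(center, float) + speed * np.array([np.cos(direction),
                                                         np.sin(direction)])

# Example: three people, two of them close enough to interact.
print(assign_group_ids([(0, 0), (1.0, 0), (10, 10)], r=1.0, tau=0.3))
print(predict_next_center((0, 0), speed=1.0, direction=0.0))
```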
To describe the relationship between the joints of one object for single object action recognition and the relationship between the joints of two interacting objects for group activity recognition, the present invention considers the joint configuration in the space-time dimension and extracts the distances and directions between joints.
FIG. 4 is a diagram for explaining the determination of the four features using joint position information.
Specifically, the space-time joint features are calculated based on the skeleton positions, as shown at 401-404.
The spatial joint distance is defined as the Euclidean distance between a pair of joints within the same frame, as in Equation (9):
[Equation 9]
F_SD(i, j) = ‖p_i^{x,t} − p_j^{y,t}‖
where p_i^{x,t} and p_j^{y,t} are the 2D position coordinates of joints i and j of human objects x and y at time t. This is computed within one person (x = y) or between people (x ≠ y).
The temporal joint distance is defined as the Euclidean distance between a pair of joints in different frames; that is, the distance between joints i and j corresponding to frames t and t + 1 is measured, as in Equation (10):
[Equation 10]
F_TD(i, j) = ‖p_i^{x,t} − p_j^{y,t+1}‖
This is computed within one person (x = y) or between two people (x ≠ y).
The spatial joint motion direction is defined as the angle of the vector between a pair of joints within the same frame, as in Equation (11):
[Equation 11]
F_SM(i, j) = arctan(Δy / Δx), where (Δx, Δy) = p_j^{y,t} − p_i^{x,t}
This is computed within one person (x = y) or between two people (x ≠ y).
The temporal joint motion direction is defined analogously between frames t and t + 1, as in Equation (12):
[Equation 12]
F_TM(i, j) = arctan(Δy / Δx), where (Δx, Δy) = p_j^{y,t+1} − p_i^{x,t}
The angle for an interacting pair is likewise computed within one person (x = y) or between two people (x ≠ y).
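As an illustrative aid, the short sketch below computes pairwise joint distances and angles either within one skeleton or between two skeletons, corresponding to the four space-time joint features of Equations (9)-(12). The array shapes and random example data are assumptions of the sketch.

```python
import numpy as np

def joint_distances(joints_a, joints_b):
    """Euclidean distance between every joint pair of two (14, 2) skeletons.
    With joints_a and joints_b from the same person and frame this gives the
    spatial joint distance (Equation (9)); taking them from different frames
    gives the temporal joint distance (Equation (10))."""
    diff = joints_a[:, None, :] - joints_b[None, :, :]
    return np.linalg.norm(diff, axis=-1)           # shape (14, 14)

def joint_angles(joints_a, joints_b):
    """Angle of the vector between every joint pair (Equations (11)-(12)),
    again spatial or temporal depending on which frames are supplied."""
    diff = joints_b[None, :, :] - joints_a[:, None, :]
    return np.arctan2(diff[..., 1], diff[..., 0])  # shape (14, 14)

# Example: features within one person (x = y) and between two people (x != y).
rng = np.random.default_rng(1)
person_x_t, person_y_t, person_x_t1 = rng.random((3, 14, 2))
spatial_dist_intra = joint_distances(person_x_t, person_x_t)    # one person
spatial_dist_inter = joint_distances(person_x_t, person_y_t)    # two people
temporal_dist = joint_distances(person_x_t, person_x_t1)        # across frames
print(spatial_dist_inter.shape, float(temporal_dist.mean()))
```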
FIG. 5 shows the process of interaction zone identification and object group creation.
FIG. 5 first collects the object center coordinates (step 501); that is, in order to identify interacting objects, the coordinates of the objects are received as input.
Next, for interaction zone identification and object group creation, a single object zone can be established (step 502) and the overlap ratio can be calculated (step 503).
Single interaction potential zones located within a predetermined range from the object can be determined based on the input object position coordinates, and the ratio of the overlapping region for each object can be calculated based on the determined single interaction potential zones. For example, to determine the single interaction potential zones, they can be determined based on the object's position coordinates and the radius of a circle.
In addition, the object can be identified by comparing the calculated ratio with the threshold assigned to the group ID of the object. In one example, the ratio of the single interaction potential zones to the overlapping region can be identified for each object, and this ratio can be compared with a threshold value for determining the group ID of each object.
Next, for interaction zone identification and object group creation, the radius of the zone is considered (step 504).
Each object has an interaction zone located in its periphery, defined as a circle of a given radius around the object center coordinates.
The calculated overlap ratio is then compared with the threshold, and group identifiers are assigned as follows.
If the current object stands alone without any overlapping area, a new group identifier (GID, GroupID) is assigned (step 506).
If another object does not overlap with the current object, the two objects are assigned different group identifiers (GID, GroupID) (step 505).
If the two objects currently overlap, the same group identifier (GID, GroupID) is allocated.
The output is the set of group identifiers (GID, GroupID) of the objects (step 507).
However, there are special cases for assigning group identifiers (GIDs, GroupIDs).
For example, as shown in FIG. 3, a moving object may approach or leave an existing group.
For this situation, a group identifier (GID, GroupID) is assigned as described above with Equations (6) to (8).
FIG. 6 shows the process of configuring two feature data sets, divided into an intra-object feature data set and an inter-object feature data set.
FIG. 6 describes the feature extraction process in detail.
A method of configuring a feature data set according to an exemplary embodiment may first receive a group ID (step 601).
That is, in order to extract the feature data sets, the objects must be divided into two groups based on the group ID. For example, based on the group ID, objects are classified into two groups, non-interacting objects and interacting objects, and based on this classification the spatial-temporal joint distance and motion direction features for the objects are extracted. To this end, at least one feature is extracted from the intra-object feature data set for single action recognition and the inter-object feature data set for interactive activity recognition.
Next, in the method of constructing a feature data set according to an embodiment, whether the number of objects is equal to or greater than two can be determined by comparing the number of objects for each group corresponding to the group ID (step 602).
In the method of constructing a feature data set according to an embodiment, features can be extracted in consideration of the compared number of objects. When the number of objects in the group is two or more as a result of the determination in step 602, inter-object features between the objects are extracted; when there is only one object in the group, intra-object features are extracted.
A method of constructing a feature data set according to an embodiment may then recognize a data set corresponding to the extracted features (step 605).
The intra-object features can be used to recognize single actions without testing interaction with other objects.
If the current object has the same group ID as another object, it means that the group is made up of more objects, and the inter-object features are also extracted.
Specifically, if only one object is in the group, only the intra-object features need to be calculated. In addition, an intra-object feature data set (step 606) and an inter-object feature data set (step 607) may be acquired based on the recognized data set.
The extracted features may include a spatial joint distance feature subset, a temporal joint distance feature subset, a spatial joint motion feature subset, and a temporal joint motion feature subset.
Specifically, the intra-object feature data set includes the spatial joint distance subset, the temporal joint distance subset, the spatial joint motion subset, and the temporal joint motion subset computed within a single human object; the vector expressing the features extracted from a single human object is composed of these four subsets.
Further, the inter-object feature data set includes, as components, the spatial joint distance, the temporal joint distance, the spatial joint motion, and the temporal joint motion computed between interacting human objects; the vector expressing the features extracted from an interacting pair of human objects is likewise composed of these four subsets.
The two feature data sets are collected frame by frame from the input video as two-dimensional matrices; one is the intra-object feature data set and the other is the inter-object feature data set.
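For illustration, the sketch below assembles a per-frame feature vector from the four subsets described above; passing the same person twice yields an intra-object vector, and passing two different people yields an inter-object vector. The concatenation order and flattening are assumptions, since the patent does not fix a specific vector layout.

```python
import numpy as np

def feature_vector(joints_a_t, joints_b_t, joints_a_t1, joints_b_t1):
    """Concatenate the four feature subsets into one vector.
    Passing the same person for a and b yields an intra-object vector;
    passing two different people yields an inter-object vector."""
    def dist(p, q):
        return np.linalg.norm(p[:, None] - q[None, :], axis=-1).ravel()
    def angle(p, q):
        d = q[None, :] - p[:, None]
        return np.arctan2(d[..., 1], d[..., 0]).ravel()
    return np.concatenate([
        dist(joints_a_t, joints_b_t),      # spatial joint distance subset
        dist(joints_a_t, joints_b_t1),     # temporal joint distance subset
        angle(joints_a_t, joints_b_t),     # spatial joint motion subset
        angle(joints_a_t, joints_b_t1),    # temporal joint motion subset
    ])

# Example: one intra-object and one inter-object vector for a single frame pair.
rng = np.random.default_rng(2)
a_t, b_t, a_t1, b_t1 = rng.random((4, 14, 2))
intra = feature_vector(a_t, a_t, a_t1, a_t1)   # same person (x = y)
inter = feature_vector(a_t, b_t, a_t1, b_t1)   # interacting pair (x != y)
print(intra.shape, inter.shape)                # each 4 * 14 * 14 = 784 values
```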
Figure 7 shows the process of codebook generation and topic modeling for two feature data sets.
FIG. 7 describes the modeling process in detail.
A dual-structure model is used, comprising one part for the intra-object feature data set and one part for the inter-object feature data set.
The model is developed under the "bag-of-words" assumption, i.e., using the Pachinko Allocation Model.
The statistics can be analyzed based on histograms of co-occurring words, and to support this model a codebook is generated by applying K-means clustering, as shown in FIG. 7.
Next, the probability model generation method may map the intra-object features and the inter-object features to codeword histograms of actions and activities, respectively.
Next, the probability model generation method encodes the words using a hierarchical model based on the mapped histograms (step 706).
Then, the probability model generation method outputs the probability model using the encoded words. Specifically, a probability model for actions may be output (step 707) or a probability model for activities may be output (step 708).
To output the probability model, the probability model generation method can generate the probability model through topic modeling based on the hierarchical topic model.
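As an illustrative aid, the sketch below builds a codebook with K-means clustering and maps a clip's feature vectors to a codeword histogram, the bag-of-words input that the topic model consumes. The use of scikit-learn, the codebook size, and the random example data are assumptions of the sketch; the patent only specifies K-means clustering followed by histogram mapping.

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative bag-of-words step: cluster feature vectors into codewords and
# map each clip to a codeword histogram. Library choice and the codebook
# size K are assumptions made for this sketch.
rng = np.random.default_rng(3)
train_features = rng.random((500, 784))        # e.g., intra-object vectors
K = 32                                         # assumed codebook size
codebook = KMeans(n_clusters=K, n_init=10, random_state=0).fit(train_features)

def to_histogram(clip_features):
    """Map one clip's feature vectors to a normalized codeword histogram."""
    words = codebook.predict(clip_features)
    hist = np.bincount(words, minlength=K).astype(float)
    return hist / max(hist.sum(), 1.0)

clip = rng.random((40, 784))                   # feature vectors of one clip
print(to_histogram(clip))
```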
FIG. 8 is a diagram for explaining an embodiment of mapping one feature vector to a histogram of codewords.
FIG. 8 corresponds to an embodiment in which an intra-object feature vector is mapped to a histogram.
In other words, one feature vector can be mapped to a histogram represented by the counts of codewords.
FIG. 9 is a diagram showing a hierarchical model for a topic model with a four-level structure.
To learn and recognize based on the "bag-of-words" model, the model is developed from a flexible and expressive extension of Latent Dirichlet Allocation (LDA), namely the Pachinko Allocation Model (Li et al., 2006).
The hierarchical model of FIG. 9 consists of N action words or M interaction words at the bottom level, n1 action sub-topics and m1 interaction sub-topics at the level above, n2 action topics and m2 interaction topics at the next level, and a single root at the top level.
A full report on this model is given in Li (2006).
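For orientation only, the following sketch lays out the four-level hierarchy of FIG. 9 as a plain data structure; the counts (N, M, n1, m1, n2, m2) are placeholder values, and no topic inference is implemented here.

```python
# Illustrative sketch of the four-level topic hierarchy of FIG. 9.
# Counts (N, M, n1, m1, n2, m2) are placeholders; no inference is implemented.
N_ACTION_WORDS, M_INTERACTION_WORDS = 32, 32
hierarchy = {
    "root": {
        "action_topics": [f"action_topic_{k}" for k in range(4)],        # n2
        "interaction_topics": [f"inter_topic_{k}" for k in range(4)],    # m2
    },
    "sub_topics": {
        "action": [f"action_sub_{k}" for k in range(8)],                 # n1
        "interaction": [f"inter_sub_{k}" for k in range(8)],             # m1
    },
    "words": {
        "action": [f"aw_{k}" for k in range(N_ACTION_WORDS)],            # N
        "interaction": [f"iw_{k}" for k in range(M_INTERACTION_WORDS)],  # M
    },
}
print({level: {k: len(v) for k, v in d.items()} for level, d in hierarchy.items()})
```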
As a result, the present invention can reduce the computational cost by omitting non-interacting objects. In addition, because no outlier values are included in the data set, the quality of the feature training data set can be improved, and the classification accuracy in distinguishing between single object action recognition and interactive group activity recognition can be improved.
The method according to an embodiment of the present invention can be implemented in the form of program instructions that can be executed through various computer means and recorded on a computer-readable medium. The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination. The program instructions recorded on the medium may be those specially designed and constructed for the present invention or may be known and available to those skilled in the computer software art. Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory. Examples of program instructions include machine language code such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like. The hardware devices described above may be configured to operate as one or more software modules to perform the operations of the present invention, and vice versa.
While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.
Therefore, the scope of the present invention should not be limited to the described embodiments, but should be determined by the appended claims and their equivalents.
Claims (19)
A method of modeling an interactive activity, the method comprising:
receiving a data set of two-dimensional skeleton positions extracted from a video;
computing position coordinates of an object from the input data set;
calculating tracking characteristics including a motion velocity and a motion direction of the object from the computed coordinates of the object;
determining an interacting object based on an interaction zone corresponding to the object and the calculated tracking characteristics;
computing features from a skeleton data set for the determined interacting object; and
modeling the computed features into topics of single human actions and interactive group activities.
Wherein computing the position coordinates of the object comprises detecting position coordinates for the object using the four torso joints from the input data set.
Wherein the interacting object and the non-interacting objects corresponding to the object are determined through the interaction potential zone and the tracking characteristics.
Wherein calculating the tracking characteristics including the motion velocity and the motion direction of the object comprises extracting, from the skeleton position data set, the spatial-temporal joint distances of the object and the motion direction between human objects.
Wherein the modeling comprises generating a probability model for the single human actions and the interactive group activities using a modeling algorithm.
A method of identifying interacting objects, the method comprising:
receiving position coordinates of an object;
determining single interaction potential zones located within a predetermined range from the object based on the received position coordinates;
calculating a ratio of overlapping regions for each object based on the determined single interaction potential zones; and
identifying the object by comparing the calculated ratio with a threshold assigned to the group ID of each object.
Wherein determining the single interaction potential zones comprises determining the single interaction potential zones based on the position coordinates of the object and the radius of a circle.
Wherein the ratio of the single interaction potential zones to the overlapping region is identified for each object.
Wherein the ratio is compared with a threshold value for determining the group ID of each object.
A method of configuring a feature data set, the method comprising:
receiving a group ID;
comparing the number of objects for each group corresponding to the group ID;
extracting features for at least one of the coordinates in consideration of the compared number of objects;
recognizing a data set corresponding to the extracted features; and
acquiring an intra-object feature data set and an inter-object feature data set based on the recognized data set.
Wherein extracting the features comprises extracting features for x = y coordinates when there is one object in the group.
Wherein extracting the features comprises extracting features for the coordinates when there is more than one object in the group.
Further comprising classifying the objects into two groups of non-interacting objects and interacting objects in consideration of the result of comparing the number of objects.
Wherein classifying the objects into the two groups of non-interacting objects and interacting objects comprises extracting spatial-temporal joint distance and motion direction features for the objects.
Wherein extracting the features comprises extracting at least one feature from an intra-object feature data set for single action recognition and an inter-object feature data set for interactive activity recognition.
A method of generating a probability model, the method comprising:
receiving a feature data set;
clustering features in the feature data set into codewords by applying a K-means clustering algorithm;
mapping intra-object features and inter-object features according to the clustering to codeword histograms of actions and activities;
encoding the words using a hierarchical model based on the mapped histograms; and
outputting a probability model using the encoded words.
Wherein outputting the probability model comprises generating the probability model through topic modeling based on the hierarchical topic model.
A program for modeling an interactive activity, the program comprising:
a set of instructions for receiving a data set of two-dimensional skeleton positions extracted from a video;
a set of instructions for computing position coordinates of an object from the input data set;
a set of instructions for calculating tracking characteristics including a motion velocity and a motion direction of the object from the computed position coordinates of the object;
a set of instructions for determining an interacting object based on an interaction zone corresponding to the object and the calculated tracking characteristics;
a set of instructions for computing features from a skeleton data set for the determined interacting object; and
a set of instructions for modeling the computed features into topics of single human actions and interactive group activities.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150122086A KR101762010B1 (en) | 2015-08-28 | 2015-08-28 | Method of modeling a video-based interactive activity using the skeleton posture datset |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
KR1020150122086A KR101762010B1 (en) | 2015-08-28 | 2015-08-28 | Method of modeling a video-based interactive activity using the skeleton posture datset |
Publications (2)
Publication Number | Publication Date |
---|---|
KR20170025535A true KR20170025535A (en) | 2017-03-08 |
KR101762010B1 KR101762010B1 (en) | 2017-07-28 |
Family
ID=58403728
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
KR1020150122086A KR101762010B1 (en) | 2015-08-28 | 2015-08-28 | Method of modeling a video-based interactive activity using the skeleton posture datset |
Country Status (1)
Country | Link |
---|---|
KR (1) | KR101762010B1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111093301A (en) * | 2019-12-14 | 2020-05-01 | 安琦道尔(上海)环境规划建筑设计咨询有限公司 | Light control method and system |
KR20210079542A (en) * | 2019-12-20 | 2021-06-30 | 한국전자기술연구원 | User Motion Recognition Method and System using 3D Skeleton Information |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101969230B1 (en) * | 2017-10-20 | 2019-04-15 | 연세대학교 산학협력단 | Apparatus and Method for Motion Recognition using Learning, and Recording Medium thereof |
KR102174695B1 (en) | 2018-11-15 | 2020-11-05 | 송응열 | Apparatus and method for recognizing movement of object |
KR20220090248A (en) | 2020-12-22 | 2022-06-29 | 주식회사 네오펙트 | Identical object identification device and identification method based on skeleton analysis for consecutive image frames |
KR20220170544A (en) | 2021-06-23 | 2022-12-30 | 하대수 | Object movement recognition system and method for workout assistant |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7366645B2 (en) | 2002-05-06 | 2008-04-29 | Jezekiel Ben-Arie | Method of recognition of human motion, vector sequences and speech |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8565479B2 (en) | 2009-08-13 | 2013-10-22 | Primesense Ltd. | Extraction of skeletons from 3D maps |
- 2015-08-28 KR KR1020150122086A patent/KR101762010B1/en active IP Right Grant
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7366645B2 (en) | 2002-05-06 | 2008-04-29 | Jezekiel Ben-Arie | Method of recognition of human motion, vector sequences and speech |
Non-Patent Citations (2)
Title |
---|
W. Yang, Y. Wang, and G. Mori, "Recognizing Human Actions from Still Images with Latent Poses", in Computer Vision and Pattern Recognition (CVPR), 2010 International Conference on. San Francisco, USA, IEEE, 2010, pp. 2030-2037 |
Y. Yang and D. Ramanan, "Articulated human detection with flexible mixtures of parts", Pattern Analysis and Machine Learning, IEEE Transaction on, vol. 36, no. 9, pp. 1775-1788, Sept 2014. |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111093301A (en) * | 2019-12-14 | 2020-05-01 | 安琦道尔(上海)环境规划建筑设计咨询有限公司 | Light control method and system |
KR20210079542A (en) * | 2019-12-20 | 2021-06-30 | 한국전자기술연구원 | User Motion Recognition Method and System using 3D Skeleton Information |
Also Published As
Publication number | Publication date |
---|---|
KR101762010B1 (en) | 2017-07-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11188783B2 (en) | Reverse neural network for object re-identification | |
KR101762010B1 (en) | Method of modeling a video-based interactive activity using the skeleton posture datset | |
Jalal et al. | Shape and motion features approach for activity tracking and recognition from kinect video camera | |
Ni et al. | Multilevel depth and image fusion for human activity detection | |
Akhter et al. | Pose estimation and detection for event recognition using Sense-Aware features and Adaboost classifier | |
Soomro et al. | Predicting the where and what of actors and actions through online action localization | |
Ma et al. | A survey of human action recognition and posture prediction | |
US9098740B2 (en) | Apparatus, method, and medium detecting object pose | |
Jalal et al. | Human Depth Sensors‐Based Activity Recognition Using Spatiotemporal Features and Hidden Markov Model for Smart Environments | |
CN104599287B (en) | Method for tracing object and device, object identifying method and device | |
Liciotti et al. | People detection and tracking from an RGB-D camera in top-view configuration: review of challenges and applications | |
WO2016025713A1 (en) | Three-dimensional hand tracking using depth sequences | |
Tran et al. | Social cues in group formation and local interactions for collective activity analysis | |
KR102371127B1 (en) | Gesture Recognition Method and Processing System using Skeleton Length Information | |
Park et al. | 2D human pose estimation based on object detection using RGB-D information | |
She et al. | A real-time hand gesture recognition approach based on motion features of feature points | |
Wilson et al. | Avot: Audio-visual object tracking of multiple objects for robotics | |
Waheed et al. | A novel deep learning model for understanding two-person interactions using depth sensors | |
Ali et al. | Deep Learning Algorithms for Human Fighting Action Recognition. | |
JP2015011526A (en) | Action recognition system, method, and program, and recognizer construction system | |
Jean et al. | Body tracking in human walk from monocular video sequences | |
Wang et al. | Hand motion and posture recognition in a network of calibrated cameras | |
Mademlis et al. | Exploiting stereoscopic disparity for augmenting human activity recognition performance | |
Hayashi et al. | Upper body pose estimation for team sports videos using a poselet-regressor of spine pose and body orientation classifiers conditioned by the spine angle prior | |
Karbasi et al. | Real-time hand detection by depth images: A survey |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
A201 | Request for examination | ||
E902 | Notification of reason for refusal | ||
E701 | Decision to grant or registration of patent right | ||
GRNT | Written decision to grant |