
US20120314031A1 - Invariant features for computer vision - Google Patents


Info

Publication number
US20120314031A1
Authority
US
United States
Prior art keywords
depth
coordinate system
local
plane
pixel
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US13/155,293
Inventor
Jamie D. J. Shotton
Mark J. Finocchio
Richard E. Moore
Alexandru O. Balan
Kyungsuk David Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Application filed by Microsoft Corp
Priority to US13/155,293
Assigned to MICROSOFT CORPORATION. Assignors: BALAN, ALEXANDRU O.; FINOCCHIO, MARK J.; LEE, KYUNGSUK DAVID; MOORE, RICHARD E.; SHOTTON, JAMIE D. J. (see document for details)
Priority to US13/688,120 (US8878906B2)
Publication of US20120314031A1
Assigned to MICROSOFT TECHNOLOGY LICENSING, LLC. Assignor: MICROSOFT CORPORATION (see document for details)
Legal status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/107 - Static hand or arm
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/285 - Selection of pattern recognition techniques, e.g. of classifiers in a multi-classifier system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/60 - Type of objects
    • G06V20/64 - Three-dimensional objects
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 - Image signal generators
    • H04N13/204 - Image signal generators using stereoscopic image cameras

Definitions

  • HCI: human-computer interface
  • One technique for identifying objects such as body parts is computer vision.
  • Some computer vision techniques develop a “classifier” by analyzing one or more example images.
  • an example image is an image that contains one or more examples of the objects that are to be identified.
  • many example images need to be analyzed to adequately develop or “train” the classifier to recognize the object.
  • features are extracted from the example image. Those features which work best to identify the object may be kept for use at run time.
  • the classifier may later be used during “run time” to identify objects such as body parts.
  • a computer vision system may capture an image in real time, such as a user interacting with a computer system.
  • the computer vision system uses the classifier to identify objects, such as the hand of the user.
  • the classifier analyzes features that are extracted from the image in order to identify the object.
  • One difficulty with computer vision is that during run time objects such as body parts could have many possible orientations relative to the camera. For example, the user might have their hand rotated at virtually any angle relative to the camera. Note that for some techniques the features that are extracted are not invariant to the possible orientations of the object. For example, the features may not be invariant to possible rotations of a user's hand.
  • the example images that are used to build the classifier could theoretically contain many different rotations.
  • example images that show a multitude of possible rotations of a hand could be used to train the classifier.
  • the accuracy of the classifier may be poor.
  • containing a multitude of rotations in the example images may lead to an overly complex classifier, which may result in slow processing speed and high memory usage at run-time
  • the features that work well for one rotation may not work well for another rotation. This may result in the classifier needing to be able to account for all of the possible rotations.
  • the features may be invariant to various orientations of the object to be identified relative to the camera.
  • the features may be rotation invariant. Therefore, fewer example images may be needed to train the classifier to recognize the object. Consequently, the complexity of the classifier may be simplified without sacrificing accuracy during run time.
  • Techniques may be used to identify objects at run time using computer vision with the use of rotation invariant features.
  • One embodiment includes a method of processing a depth map that includes the following.
  • a depth map that includes depth pixels is accessed.
  • the depth map is associated with an image coordinate system having a plane.
  • a local orientation for each depth pixel in a subset of the depth pixels is estimated.
  • the local orientation is one or both of an in-plane orientation and an out-of-plane orientation relative to the plane of the image coordinate system.
  • a local coordinate system for each of the depth pixels in the subset is determined. Each local coordinate system is based on the local orientation of the corresponding depth pixel.
  • a feature region is defined relative to the local coordinate system for each of the depth pixels in the subset.
  • the feature region for each of the depth pixels in the subset is transformed from the local coordinate system to the image coordinate system.
  • the transformed feature regions are used to process the depth map.
  • the depth map may be processed at either training time or run time.
  • One embodiment includes a system comprising a depth camera and logic coupled to the depth camera.
  • the depth camera is for generating depth maps that include a plurality of depth pixels. Each pixel has a depth value, and each depth map is associated with a 2D image coordinate system.
  • the logic is operable to access a depth map from the depth camera; the depth map is associated with an image coordinate system having a plane.
  • the logic is operable to estimate a local orientation for each depth pixel in a subset of the depth pixels.
  • the local orientation includes one or both of an in-plane orientation that is in the plane of the 2D image coordinate system and an out-of-plane orientation that is out of the plane of the 2D image coordinate system.
  • the logic is operable to define a local 3D coordinate system for each of the depth pixels in the subset, each local 3D coordinate system is based on the local orientation of the corresponding depth pixel.
  • the logic is operable to define a feature region relative to the local coordinate system for each of the depth pixels in the subset.
  • the logic is operable to transform the feature region for each of the depth pixels in the subset from the local 3D coordinate system to the 2D image coordinate system.
  • the logic is operable to identify an object in the depth map based on the transformed feature regions.
  • One embodiment is a computer readable storage medium having instructions stored thereon which, when executed on a processor, cause the processor to perform the following steps.
  • a depth map that includes an array of depth pixels is accessed. Each depth pixel has a depth value, and the depth map is associated with a 2D image coordinate system.
  • a local orientation for each depth pixel in a subset of the depth pixels is determined. The local orientation includes an in-plane orientation that is in the plane of the 2D image coordinate system and an out-of-plane orientation that is out of the plane of the 2D image coordinate system.
  • a 3D model for the depth map is determined. The model includes 3D points that are based on the depth pixels, each of the points has a corresponding depth pixel.
  • a local 3D coordinate system is defined for each of the plurality of points, each local 3D coordinate system is based on the position and local orientation of the corresponding depth pixel.
  • Feature test points are determined relative to the local coordinate system for each of the points.
  • the feature test points are transformed from the local 3D coordinate system to the 2D image coordinate system for each of the feature test points.
  • An object is identified in the depth map based on the transformed feature test points.
  • FIG. 1 depicts one embodiment of a target detection and tracking system tracking a user.
  • FIG. 2 depicts one embodiment of a target detection and tracking system.
  • FIG. 3A is a flowchart of one embodiment of a process of training a machine learning classifier using invariant features.
  • FIG. 3B is a flowchart that describes a process of using invariant features to identify objects using computer vision.
  • FIG. 4A depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated based on edges, in accordance with one embodiment.
  • FIG. 4B depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated based on edges, in accordance with one embodiment.
  • FIG. 4C is a flowchart of one embodiment of a process of assigning angles to depth pixels based on edges.
  • FIG. 4D depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated based on medial axes, in accordance with one embodiment.
  • FIG. 4E depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated based on medial axes, in accordance with one embodiment.
  • FIG. 4F is a flowchart of one embodiment of a process of assigning angles to depth pixels based on medial axes.
  • FIG. 5 is a flowchart of one embodiment of a process of estimating local orientation of depth pixels for out-of-plane orientation.
  • FIG. 6A and FIG. 6B depict different rotations of a point cloud model with one embodiment of a local coordinate system.
  • FIG. 7 depicts a 2D image coordinate system and a 3D local coordinate system used in various embodiments, with a corresponding feature window in each coordinate system.
  • FIG. 8 is a flowchart of one embodiment of a process of establishing a local in-plane and/or out-of-plane orientation for a depth pixel.
  • FIG. 9 illustrates an example of a computing environment in accordance with embodiments of the present disclosure.
  • FIG. 10 illustrates an example of a computing environment in accordance with embodiments of the present disclosure.
  • the features may be rotation invariant.
  • the features may also be translation invariant and/or scale invariant.
  • the features are in-plane rotation invariant.
  • the features are out-of-plane rotation invariant.
  • the features are both in-plane and out-of-plane rotation invariant.
  • the invariant features are used in a motion capture system having a capture device.
  • rotation invariant features may be used to identify a user's hand such that the hand can be tracked.
  • One example application is to determine gestures made by the user to allow the user to interact with the system. Therefore, an example motion capture system will be described. However, it will be understood that technology described herein is not limited to a motion capture system.
  • FIG. 1 depicts an example of a motion capture system 10 in which a person interacts with an application.
  • the motion capture system 10 includes a display 96 , a capture device 20 , and a computing environment or apparatus 12 .
  • the capture device 20 may include an image camera component 22 having a light transmitter 24 , light receiver 25 , and a red-green-blue (RGB) camera 28 .
  • the light transmitter 24 emits a collimated light beam. Examples of collimated light include, but are not limited to, Infrared (IR) and laser.
  • the light transmitter 24 is an LED. Light that reflects off from an object 8 in the field of view is detected by the light receiver 25 .
  • a user also referred to as a person or player, stands in a field of view 6 of the capture device 20 .
  • Lines 2 and 4 denote a boundary of the field of view 6 .
  • the capture device 20 and computing environment 12 provide an application in which an avatar 97 on the display 96 tracks the movements of the object 8 (e.g., a user).
  • the avatar 97 may raise an arm when the user raises an arm.
  • the avatar 97 is standing on a road 98 in a 3-D virtual world.
  • a Cartesian world coordinate system may be defined which includes a z-axis which extends along the focal length of the capture device 20 , e.g., horizontally, a y-axis which extends vertically, and an x-axis which extends laterally and horizontally. Note that the perspective of the drawing is modified as a simplification, as the display 96 extends vertically in the y-axis direction and the z-axis extends out from the capture device 20 , perpendicular to the y-axis and the x-axis, and parallel to a ground surface on which the user stands.
  • the motion capture system 10 is used to recognize, analyze, and/or track an object.
  • Invariant features (e.g., rotation invariant features) may be used by the motion capture system 10 to recognize, analyze, and/or track the object.
  • the computing environment 12 can include a computer, a gaming system or console, or the like, as well as hardware components and/or software components to execute applications.
  • the capture device 20 may include a camera which is used to visually monitor one or more objects 8 , such as the user, such that gestures and/or movements performed by the user may be captured, analyzed, and tracked to perform one or more controls or actions within an application, such as animating an avatar or on-screen character or selecting a menu item in a user interface (UI).
  • a gesture may be dynamic, comprising a motion, such as mimicking throwing a ball.
  • a gesture may be a static pose, such as holding one's forearms crossed.
  • a gesture may also incorporate props, such as swinging a mock sword.
  • Some movements of the object 8 may be interpreted as controls that may correspond to actions other than controlling an avatar.
  • the player may use movements to end, pause, or save a game, select a level, view high scores, communicate with a friend, and so forth.
  • the player may use movements to select the game or other application from a main user interface, or to otherwise navigate a menu of options.
  • a full range of motion of the object 8 may be available, used, and analyzed in any suitable manner to interact with an application.
  • the person can hold an object such as a prop when interacting with an application.
  • the movement of the person and the object may be used to control an application.
  • the motion of a player holding a racket may be tracked and used for controlling an on-screen racket in an application which simulates a tennis game.
  • the motion of a player holding a toy weapon such as a plastic sword may be tracked and used for controlling a corresponding weapon in the virtual world of an application which provides a pirate ship.
  • the motion capture system 10 may further be used to interpret target movements as operating system and/or application controls that are outside the realm of games and other applications which are meant for entertainment and leisure. For example, virtually any controllable aspect of an operating system and/or application may be controlled by movements of the object 8 .
  • the motion capture system 10 may be connected to an audiovisual device such as the display 96 , e.g., a television, a monitor, a high-definition television (HDTV), or the like, or even a projection on a wall or other surface, that provides a visual and audio output to the user.
  • An audio output can also be provided via a separate device.
  • the computing environment 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that provides audiovisual signals associated with an application.
  • the display 96 may be connected to the computing environment 12 via, for example, an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, or the like.
  • FIG. 2 illustrates one embodiment of a target detection and tracking system 10 including a capture device 20 and computing environment 12 that may be used to recognize human and non-human targets in a capture area (with or without special sensing devices attached to the subjects), uniquely identify them, and track them in three dimensional space.
  • the capture device 20 may be a depth camera (or depth sensing camera) configured to capture video with depth information including a depth map that may include depth values via any suitable technique including, for example, time-of-flight, structured light, stereo image, or the like.
  • the capture device 20 may include a depth sensing image sensor.
  • the capture device 20 may organize the calculated depth information into “Z layers,” or layers that may be perpendicular to a Z-axis extending from the depth camera along its line of sight.
  • the capture device 20 may include an image camera component 32 .
  • the image camera component 32 may be a depth camera that may capture a depth map of a scene.
  • the depth map may include a two-dimensional (2-D) pixel area of the captured scene where each pixel in the 2-D pixel area may represent a depth value such as a distance in, for example, centimeters, millimeters, or the like of an object in the captured scene from the camera.
  • the image camera component 32 may be pre-calibrated to obtain estimates of camera intrinsic parameters such as focal length, principal point, lens distortion parameters etc. Techniques for camera calibration are discussed in, Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000, which is hereby incorporated by reference.
  • the image camera component 32 may include an IR light component 34 , a three-dimensional (3-D) camera 36 , and an RGB camera 38 that may be used to capture the depth map of a capture area.
  • the IR light component 34 of the capture device 20 may emit an infrared light onto the capture area and may then use sensors to detect the backscattered light from the surface of one or more targets and objects in the capture area using, for example, the 3-D camera 36 and/or the RGB camera 38 .
  • capture device 20 may include an IR CMOS image sensor.
  • pulsed infrared light may be used such that the time between an outgoing light pulse and a corresponding incoming light pulse may be measured and used to determine a physical distance from the capture device 20 to a particular location on the targets or objects in the capture area. Additionally, the phase of the outgoing light wave may be compared to the phase of the incoming light wave to determine a phase shift. The phase shift may then be used to determine a physical distance from the capture device to a particular location on the targets or objects.
  • time-of-flight analysis may be used to indirectly determine a physical distance from the capture device 20 to a particular location on the targets or objects by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.
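  • As a concrete illustration of the pulse-timing and phase-shift relationships described above, the short sketch below computes distance from a measured round-trip time and from a measured phase shift. The variable names and the 30 MHz modulation frequency are illustrative assumptions, not parameters of the capture device 20.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_round_trip_s: float) -> float:
    """Pulsed IR: light travels to the target and back, so distance is half the path."""
    return C * t_round_trip_s / 2.0

def distance_from_phase_shift(phase_rad: float, mod_freq_hz: float) -> float:
    """Continuous-wave TOF: the phase shift of the modulated wave maps to distance
    (unambiguous only within half a modulation wavelength)."""
    return (phase_rad / (2.0 * math.pi)) * (C / mod_freq_hz) / 2.0

# e.g. a 20 ns round trip corresponds to about 3 m
print(distance_from_round_trip(20e-9))          # ~3.0
print(distance_from_phase_shift(math.pi, 30e6)) # ~2.5 m at 30 MHz modulation
```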
  • the capture device 20 may use structured light to capture depth information.
  • patterned light (i.e., light displayed as a known pattern such as a grid pattern or a stripe pattern) may be projected onto the capture area.
  • upon striking the surface of one or more targets or objects in the capture area, the pattern may become deformed in response.
  • Such a deformation of the pattern may be captured by, for example, the 3-D camera 36 and/or the RGB camera 38 and analyzed to determine a physical distance from the capture device to a particular location on the targets or objects.
  • two or more different cameras may be incorporated into an integrated capture device.
  • for example, a depth camera and a video camera (e.g., an RGB video camera) may be combined in a single capture device.
  • two or more separate capture devices may be cooperatively used.
  • a depth camera and a separate video camera may be used.
  • a video camera, if included, may be used to provide target tracking data, confirmation data for error correction of target tracking, image capture, face recognition, high-precision tracking of fingers (or other small features), light sensing, and/or other functions.
  • the capture device 20 may include two or more physically separated cameras that may view a capture area from different angles to obtain visual stereo data that may be resolved to generate depth information. Depth may also be determined by capturing images using a plurality of detectors that may be monochromatic, infrared, RGB, or any other type of detector and performing a parallax calculation. Other types of depth map sensors can also be used to create a depth map.
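  • For the stereo/parallax approach just described, a common simplification is a rectified pinhole camera pair, where depth follows from disparity as Z = f·B/d. The sketch below assumes hypothetical focal-length and baseline values; it is not the capture device's calibration.

```python
def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Rectified stereo: depth Z = f * B / d (correspondences assumed on the same row)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# e.g. 40 px disparity with a 600 px focal length and 7.5 cm baseline -> ~1.1 m
print(depth_from_disparity(40.0, 600.0, 0.075))
```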
  • capture device 20 may include a microphone 40 .
  • the microphone 40 may include a transducer or sensor that may receive and convert sound into an electrical signal.
  • the microphone 40 may be used to reduce feedback between the capture device 20 and the computing environment 12 in the target detection and tracking system 10 .
  • the microphone 40 may be used to receive audio signals that may also be provided by the user to control applications such as game applications, non-game applications, or the like that may be executed by the computing environment 12 .
  • the capture device 20 may include logic 42 that is in communication with the image camera component 22 .
  • the logic 42 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions.
  • the logic 42 may also include hardware such as an ASIC, electronic circuitry, logic gates, etc.
  • the processor 42 may execute instructions that may include instructions for storing profiles, receiving the depth map, determining whether a suitable target may be included in the depth map, converting the suitable target into a skeletal representation or model of the target, or any other suitable instructions.
  • a capture device may include one or more onboard processing units configured to perform one or more target analysis and/or tracking functions. Moreover, a capture device may include firmware to facilitate updating such onboard processing logic.
  • the capture device 20 may include a memory component 44 that may store the instructions that may be executed by the processor 42 , images or frames of images captured by the 3-D camera or RGB camera, user profiles or any other suitable information, images, or the like.
  • the memory component 44 may include random access memory (RAM), read only memory (ROM), cache, Flash memory, a hard disk, or any other suitable storage component.
  • the memory component 44 may also be referred to as a computer storage medium.
  • the memory component 44 may be a separate component in communication with the image capture component 32 and the processor 42 .
  • the memory component 44 may be integrated into the processor 42 and/or the image capture component 32 .
  • some or all of the components 32 , 34 , 36 , 38 , 40 , 42 and 44 of the capture device 20 illustrated in FIG. 2 are housed in a single housing.
  • the capture device 20 may be in communication with the computing environment 12 via a communication link 46 .
  • the communication link 46 may be a wired connection including, for example, a USB connection, a Firewire connection, an Ethernet cable connection, or the like and/or a wireless connection such as a wireless 802.11b, g, a, or n connection.
  • the computing environment 12 may provide a clock to the capture device 20 that may be used to determine when to capture, for example, a scene via the communication link 46 .
  • the capture device 20 may provide the depth information and images captured by, for example, the 3-D camera 36 and/or the RGB camera 38 to the computing environment 12 via the communication link 46 .
  • the computing environment 12 may then use the depth information and captured images to, for example, create a virtual screen, adapt the user interface and control an application such as a game or word processor.
  • computing environment 12 includes gestures library 192 , structure data 198 , gesture recognition engine 190 , depth map processing and object reporting module 194 , and operating system 196 .
  • Depth map processing and object reporting module 194 uses the depth maps to track the motion of objects, such as the user and other objects. To assist in the tracking of the objects, depth map processing and object reporting module 194 uses gestures library 192, structure data 198, and gesture recognition engine 190. In some embodiments, the depth map processing and object reporting module 194 uses a classifier 195 and a feature library 199 to identify objects.
  • the feature library 199 may contain invariant features, such as rotation invariant features.
  • structure data 198 includes structural information about objects that may be tracked.
  • a skeletal model of a human may be stored to help understand movements of the user and recognize body parts.
  • structural information about inanimate objects, such as props may also be stored to help recognize those objects and help understand movement.
  • gestures library 192 may include a collection of gesture filters, each comprising information concerning a gesture that may be performed by the skeletal model.
  • a gesture recognition engine 190 may compare the data captured by capture device 20 in the form of the skeletal model and movements associated with it to the gesture filters in the gesture library 192 to identify when a user (as represented by the skeletal model) has performed one or more gestures. Those gestures may be associated with various controls of an application.
  • the computing environment 12 may use the gesture recognition engine 190 to interpret movements of the skeletal model and to control operating system 196 or an application based on the movements.
  • depth map processing and object reporting module 194 will report to operating system 196 an identification of each object detected and the position and/or orientation of the object for each frame. Operating system 196 will use that information to update the position or movement of an object (e.g., an avatar) or other images in the display or to perform an action on the provided user-interface.
  • FIG. 3A is a flowchart of one embodiment of a process 350 of training a machine learning classifier using invariant features.
  • the features may be invariant to any combination of rotation, translation, and scaling.
  • Rotation invariant features include in-plane and/or out-of-plane invariance.
  • Process 350 may involve use of a capture device 20 .
  • the process 350 may create the machine learning classifier that is later used at run time to identify objects.
  • one or more example depth maps are accessed. These images may have been captured by a capture device 20 . These depth maps may be labeled such that each depth pixel has been classified, for instance manually, or procedurally using computer generated imagery (CGI). For example, each depth pixel may be manually or procedurally classified as being part of a finger, hand, torso, specific segment of a body, etc.
  • the labeling of the depth pixels may involve a person studying the depth map and assigning a label to each pixel, or assigning a label to a group of pixels.
  • the labels might instead be continuous in a regression problem. For example, one might label each pixel with a distance to nearby body joints. Note that because the process 350 may use rotation invariant features to train the classifier, the number of example depth maps may be kept fairly low. For example, it may not be necessary to provide example images showing a multitude of possible rotations of the object.
  • In step 354, canonical features are computed using an invariant feature transform.
  • each labeled example image may be processed in order to extract rotation-invariant features.
  • a local coordinate system is defined for any given pixel using a combination of in-plane and out-of-plane orientation estimates, and depth. This local coordinate system may be used to transform a feature window prior to computing the features to achieve rotation invariance.
  • the result of step 354 may be a set of canonical features. Step 354 will be discussed in more detail with respect to FIG. 3B .
  • class labels (or continuous regression labels) are assigned to corresponding features based on the pixel labels in the example images.
  • In step 358, the canonical features and corresponding labels are passed to a machine learning classification system to train a classifier 195.
  • the features may be rotation invariant. If step 354 determined both in-plane and out-of-plane orientations, then the features may be both in-plane and out-of-plane invariant. If step 354 determined only in-plane orientations, then the features may be in-plane rotation invariant. If step 354 determined only out-of-plane orientations, then the features may be out-of-plane rotation invariant.
  • the classifier 195 may be used at run-time to classify rotationally-normalized features extracted from new input images.
  • the features may also be invariant to translation and/or scaling. In some embodiments, features that are determined to be useful for identifying objects are saved to a feature library 199 for use at run time. A minimal training sketch follows below.
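  • The following is a minimal sketch of the training flow of process 350, assuming the canonical (rotation-normalized) features have already been computed and paired with per-pixel labels. A scikit-learn decision tree stands in for the machine learning classification system of step 358; the disclosure does not prescribe a particular library, and the toy data here is random and purely illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_classifier(canonical_features: np.ndarray, labels: np.ndarray):
    """canonical_features: (n_pixels, n_features) rotation-invariant features.
    labels: (n_pixels,) body-part class assigned to each labeled example pixel."""
    clf = DecisionTreeClassifier(max_depth=20)
    clf.fit(canonical_features, labels)
    return clf

# Hypothetical toy data: 1000 pixels, 32 canonical features each, 4 body-part classes.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 32))
y = rng.integers(0, 4, size=1000)
clf = train_classifier(X, y)
print(clf.predict(X[:5]))
```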
  • FIG. 3B is a flowchart that describes a process 300 of using invariant features to identify objects using computer vision.
  • the features may be rotation invariant.
  • Rotation invariant features include in-plane rotation invariant, out-of-plane rotation invariant, or both.
  • the features may also be invariant to translation and/or scaling.
  • Process 300 may be performed when a user is interacting with a motion capture system 10 . Thus, process 300 could be used in a system such as depicted in FIG. 1 or 2 . Process 300 may be used in a wide variety of other computer vision scenarios.
  • a depth map is accessed.
  • the capture device 20 may be used to capture the depth map.
  • the depth map may include depth pixels.
  • the depth map may be associated with an image coordinate system. For example, each depth pixel may have two coordinates (u, v) and a depth value.
  • the depth map may be considered to be in a plane that is defined by the two coordinates (u, v). This plane may be based on the orientation of the depth camera and may be referred to herein as an imaging plane. If an object in the camera's field of view moves, it may be described as moving in-plane, out-of-plane or both.
  • rotating movement in the u, v plane (with points on the object retaining their depth values) may be referred to as in-plane rotation (axis of rotation is orthogonal to the u, v plane).
  • Rotating movement that causes changes in depth values at different rates for different points on the object may be referred to as out-of-plane rotation.
  • rotation of a hand with the palm facing the camera is one example of in-plane rotation.
  • Rotation of a hand with the thumb pointing towards and then away from the camera is one example of out-of-plane rotation.
  • the depth map is filtered.
  • the depth map may be undistorted to remove the distortion effects from the lens.
  • the depth map may be down-sampled to a lower processing resolution such that the depth map may be more easily used and/or more quickly processed with less computing overhead.
  • one or more high-variance and/or noisy depth values may be removed and/or smoothed from the depth map and portions of missing and/or removed depth information may be filled in and/or reconstructed.
  • the acquired depth map may be processed to distinguish foreground pixels from background pixels.
  • Foreground pixels may be associated with some object (or objects) of interest to be analyzed.
  • background is used to describe anything in an image that is not part of the one or more objects of interest. For ease of discussion, a single object will be referred to when discussing process 300.
  • Process 300 analyzes pixels in that object of interest. These pixels will be referred to as a subset of the pixels in the depth map.
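  • A minimal sketch of the filtering and foreground/background separation described above, under simplifying assumptions: zero depth values are treated as missing, noise is handled with a median filter, and the foreground is taken to be everything closer than a hypothetical threshold. Lens undistortion would additionally need the camera's calibration parameters and is omitted.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess_depth(depth_mm: np.ndarray, downsample: int = 2,
                     fg_max_depth_mm: float = 1500.0):
    """Return a filtered, downsampled depth map and a foreground mask.
    Zero depth values are treated as missing measurements."""
    d = depth_mm.astype(np.float32)[::downsample, ::downsample]  # lower processing resolution
    d = median_filter(d, size=3)                  # smooth noisy / high-variance values
    foreground = (d > 0) & (d < fg_max_depth_mm)  # object(s) of interest are assumed nearby
    return d, foreground
```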
  • Steps 308 - 316 describe processing individual pixels associated with the object of interest. In general, these steps involve performing an invariant feature transform. For example, this may be a rotation invariant transform. The transform may also be invariant to translation and/or scale. Note that steps 308 - 316 are one embodiment of step 354 from FIG. 3A .
  • In step 308, a determination is made whether there are more pixels in the subset to process. If so, processing continues with step 310 with one of the depth pixels.
  • In step 310, a local orientation of the depth pixel is estimated.
  • the local orientation is an in-plane orientation.
  • the local orientation is an out-of-plane orientation.
  • the local orientation is both an in-plane orientation and an out-of-plane orientation. Further details of estimating a local orientation are discussed below.
  • a local coordinate system is defined for the depth pixel.
  • the local coordinate system is a 3D coordinate system.
  • the local coordinate system is based on the local orientation of the depth pixel. For example, if the user's hand moves, rotates, etc., then the local coordinate system moves with the hand. Further details of defining a local coordinate system are discussed below.
  • a feature region is defined relative to the local coordinate system for the presently selected depth pixel.
  • a feature window is defined with its center at the depth pixel.
  • One or more feature test points, feature test rectangles, Haar wavelets, or other such features may be defined based on the geometry of the feature window.
  • the feature region is transformed from the local coordinate system to the image coordinate system. Further details of performing the transform are discussed below. Note that this may involve a transformation from the 3D space of the local coordinate system to a 2D space of the depth map.
  • In step 318, the transformed feature regions are used to attempt to identify one or more objects in the depth map. For example, an attempt is made to identify a user's hand. This attempt may include classifying each pixel. For example, each pixel may be assigned a probability that it is part of a hand, head, arm, certain segment of an arm, etc.
  • a decision tree is used to classify pixels. Such analysis can determine a best-guess of a target assignment for that pixel and the confidence that the best-guess is correct.
  • the best-guess may include a probability distribution over two or more possible targets, and the confidence may be represented by the relative probabilities of the different possible targets.
  • the best-guess may include a spatial distribution over 3D offsets to body or hand joint positions.
  • an observed depth value comparison between two pixels is made, and, depending on the result of the comparison, a subsequent depth value comparison between two pixels is made at the child node of the decision tree. The result of such comparisons at each node determines the pixels that are to be compared at the next node.
  • the terminal nodes of each decision tree result in a target classification or regression with associated confidence.
  • subsequent decision trees may be used to iteratively refine the best-guess of the one or more target assignments for each pixel and the confidence that the best-guess is correct. For example, once the pixels have been classified with the first classifier tree (based on neighboring depth values), a refining classification may be performed to classify each pixel by using a second decision tree that looks at the previous classified or regressed pixels and/or depth values. A third pass may also be used to further refine the classification or regression of the current pixel by looking at the previous classified or regressed pixels and/or depth values. It is to be understood that virtually any number of iterations may be performed, with fewer iterations resulting in less computational expense and more iterations potentially offering more accurate classifications or regressions, and/or confidences.
  • the decision trees may have been constructed during a training mode in which the example images were analyzed to determine the questions (i.e., tests) that can be asked at each node of the decision trees in order to produce accurate pixel classifications.
  • foreground pixel assignment is stateless, meaning that the pixel assignments are made without reference to prior states (or prior image frames).
  • One example of a stateless process for assigning probabilities that a particular pixel or group of pixels represents one or more objects is the Exemplar process.
  • the Exemplar process uses a machine-learning approach that takes a depth map and classifies each pixel by assigning to each pixel a probability distribution over the one or more objects to which it could correspond.
  • a given pixel that is in fact part of a tennis racquet may be assigned a 70% chance that it belongs to a tennis racquet, a 20% chance that it belongs to a ping pong paddle, and a 10% chance that it belongs to a right arm.
  • decision trees are discussed in US Patent Application Publication 2010/0278384, titled “Human Body Pose Estimation,” by Shotton et al., published on Nov. 4, 2010, which is hereby incorporated by reference. Note that it is not required that decision trees be used.
  • Another technique that may be used to classify pixels is a Support Vector Machine (SVM).
  • Step 318 may include using a classifier that was developed during a training session such as that of FIG. 3A .
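  • The sketch below illustrates the kind of per-node depth-value comparison described above for step 318. The node structure, offsets, threshold, and class names are illustrative assumptions; in the embodiments above, the offsets would first be rotation-normalized through the local coordinate system rather than applied directly in image space.

```python
import numpy as np

class Node:
    """One decision node: compare the depth at two offsets around the pixel
    and branch; leaf nodes carry a class-probability distribution."""
    def __init__(self, offsets=None, threshold=0.0, left=None, right=None, probs=None):
        self.offsets, self.threshold = offsets, threshold
        self.left, self.right, self.probs = left, right, probs

def classify_pixel(depth, u, v, node):
    while node.probs is None:                      # descend until a leaf is reached
        (du1, dv1), (du2, dv2) = node.offsets
        h, w = depth.shape
        d1 = depth[np.clip(v + dv1, 0, h - 1), np.clip(u + du1, 0, w - 1)]
        d2 = depth[np.clip(v + dv2, 0, h - 1), np.clip(u + du2, 0, w - 1)]
        node = node.left if (d1 - d2) < node.threshold else node.right
    return node.probs                              # e.g. {"hand": 0.7, "arm": 0.3}

# Hypothetical one-split tree evaluated on a toy depth map.
leaf_a = Node(probs={"hand": 0.8, "arm": 0.2})
leaf_b = Node(probs={"hand": 0.1, "arm": 0.9})
tree = Node(offsets=[(3, 0), (-3, 0)], threshold=10.0, left=leaf_a, right=leaf_b)
depth = np.full((40, 40), 800.0)
print(classify_pixel(depth, 20, 20, tree))
```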
  • step 354 is to estimate a local orientation of depth pixels.
  • FIGS. 4A-4F will be referred to in order to discuss estimating a local orientation of depth pixels with respect to the (u, v) coordinate system of the depth map.
  • the depth values are not factored into the local orientation. Therefore, this may be considered to be an in-plane orientation.
  • FIG. 4A depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated, in accordance with one embodiment.
  • Each depth pixel is assigned a value between 0-360 degrees, in this embodiment. The assignment is made such that if the object is rotated in-plane (e.g., in the (u, v) image plane) the depth pixel will have the same local orientation, or at least very close to the same value. For example, the depth pixel may have the same angle assigned to it regardless of rotation in the (u, v) image plane.
  • the depth map has a u-axis and a v-axis.
  • the angle may be with respect to either axis, or some other axis.
  • Two example depth pixels p1, p2 are shown.
  • Two points q1, q2 are also depicted.
  • the point q is the nearest point on the edge of the hand to the given depth pixel.
  • a line is depicted from p to q.
  • the angle θ is the angle of that line to the u-axis (or more precisely to a line that runs parallel to the u-axis).
  • the angle θ serves as a way of describing a local orientation of a depth pixel that is in-plane rotation invariant.
  • FIG. 4B depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated, in accordance with one embodiment.
  • This embodiment uses a different technique for determining the angle than the embodiment of FIG. 4A .
  • the angle is based on a tangent to the object at a point q.
  • the two example depth pixels p1, p2 and the two points q1, q2 are depicted.
  • the angle θ1a for point p1 is defined by the tangent to the hand at q1.
  • Similar reasoning applies for angle θ2a. Note that if the hand were rotated in the (u, v) plane, the angle θ would change by the same amount for all pixels. Therefore, the angle θ serves as a way of describing a local orientation of a depth pixel that is in-plane rotation invariant.
  • the depth pixels are grouped into those with angles between 60-180, those between 180-300, and those between 300-60. In actual practice, no such grouping is required. Also, note that it is not required that the angle assignment be between 0-360 degrees. For example, it could be between ⁇ 180 to +180 degrees, or another scheme. It may also be between 0-180, in which case the feature transform is rotationally invariant only up to a two-way ambiguity.
  • FIG. 4C is a flowchart of one embodiment of a process 450 of assigning an angle to a depth pixel.
  • the process 450 may be performed once for each depth pixel in an object of interest.
  • the angle is determined relative to the nearest edge of the object. Therefore, process 450 may be used for either embodiment of FIG. 4A or 4 B.
  • process 450 is one embodiment of estimating local orientation of step 310 .
  • process 450 is one embodiment of estimating in-plane local orientation.
  • edges of the object are detected.
  • the edge is one example of a reference line of the object of interest.
  • a variety of edge detection techniques may be used. Since edge detection is well known by those of ordinary skill in the art, it will not be discussed in detail. Note that edge detection could be performed in a step prior to step 310.
  • In step 456, the closest edge to the present depth pixel is determined.
  • q1 in FIG. 4A or 4B is determined as the closest point on the edge of the hand to depth pixel p1.
  • q2 is determined as the closest point on the edge of the hand to depth pixel p2, when processing that depth pixel. This can be efficiently computed using, for example, distance transforms.
  • a rotation invariant angle to assign to the depth pixel is determined.
  • the angle may be defined based on the tangent to the edge of the hand at the edge point (e.g., q1, q2). This angle is one example of a rotation invariant angle for the closest edge point. Since the closest edge point (e.g., q1) is associated with the depth pixel (p1), the angle may also be considered to be one example of a rotation invariant angle for the depth pixel. As noted, any convenient reference axis may be used, such as the u-axis of the depth map. This angle is assigned to the present depth pixel. Referring to FIG. 4B, θ1b is assigned to p1 and θ2b is assigned to p2.
  • the angle may be defined based on the technique shown in FIG. 4A .
  • any convenient reference axis may be used, such as the u-axis of the depth map.
  • the angle is defined as the angle between the u-axis and the line between p and q. This angle is assigned to the present depth pixel.
  • θ1 is assigned to p1 and θ2 is assigned to p2.
  • Process 450 continues until angles have been assigned to all depth pixels of interest.
  • In step 460, smoothing of the results may be performed.
  • the angle of each depth pixel may be compared to its neighbors, with outliers being smoothed.
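  • A sketch of process 450 under simplifying assumptions: the object mask is given, the edge is taken to be the mask boundary, and the nearest edge point is found with a Euclidean distance transform. The angle is measured from the pixel p to its nearest edge point q relative to the u-axis, as in FIG. 4A; the function and variable names are illustrative, not from the disclosure.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def in_plane_angles_from_edges(object_mask: np.ndarray) -> np.ndarray:
    """object_mask: bool array. Assign each object pixel the angle (radians,
    relative to the u-axis) of the line from the pixel to its nearest edge point."""
    edges = object_mask & ~binary_erosion(object_mask)        # boundary of the object
    # The distance transform of the non-edge region also returns, per pixel,
    # the indices of the nearest edge pixel (the point q in FIG. 4A).
    _, (qv, qu) = distance_transform_edt(~edges, return_indices=True)
    vv, uu = np.indices(object_mask.shape)
    angles = np.arctan2(qv - vv, qu - uu)                     # line p -> q vs. u-axis
    angles[~object_mask] = np.nan                             # only defined on the object
    return angles                                             # pixels on the edge get 0
```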
  • FIG. 4D depicts a hand to discuss such an embodiment.
  • a medial axis may be defined as a line that is roughly a mid-point between two edges. To some extent, a medial axis may serve to represent a skeletal model.
  • FIG. 4D shows a depth pixel p3, with its closest medial axis point q3.
  • the angle θ3a represents a local orientation of depth pixel p3. Note that if the hand were rotated in the (u, v) plane, the angle θ3a would change by the same amount for all pixels.
  • the angle θ3a serves as a way of describing a local orientation of a depth pixel that is in-plane rotation invariant.
  • the angle is defined based on the line parallel to the u-axis and the line between p and q.
  • FIG. 4E depicts a hand to discuss another embodiment for determining a rotation invariant angle.
  • FIG. 4E shows a depth pixel p3, with its closest medial axis point q3.
  • the angle θ3b represents a local orientation of depth pixel p3.
  • θ3b is defined based on the tangent at point q3 to the medial axis. Note that if the hand were rotated in the (u, v) plane, the angle θ3b would change by the same amount for all pixels. Therefore, the angle θ3b serves as a way of describing a local orientation of a depth pixel that is in-plane rotation invariant.
  • FIG. 4F is a flowchart of one embodiment of a process 480 of assigning angles to depth pixels.
  • the angle is determined relative to the nearest medial axis of the object. Therefore, FIGS. 4D and 4E will be referred to when discussing process 480.
  • Process 480 is one embodiment of steps 308 and 310 .
  • process 480 is one embodiment of estimating an in-plane orientation of depth pixels.
  • medial axes of the object are determined.
  • a medial axis may be defined based on the contour of the object. It can be implemented by iteratively eroding the boundaries of the object without allowing the object to break apart. The remaining pixels make up the medial axes.
  • Medial axis computation is well known by those of ordinary skill in the art, so it will not be discussed in detail.
  • Example medial axes are depicted in FIGS. 4D and 4E .
  • a medial axis is one example of a reference line in the object.
  • In step 486, the closest point on a medial axis to the present depth pixel is determined. Referring to either FIG. 4D or 4E, point q3 may be determined to be the closest point to p3.
  • a rotation invariant angle for the depth pixel is determined.
  • the angle may be based on the tangent to the medial axis at point q3, as depicted in FIG. 4E.
  • the angle may also be determined based on the technique shown in FIG. 4D .
  • any convenient reference axis may be used, such as the u-axis of the depth map.
  • the angle is one example of a rotation invariant angle for the point q3. Since the closest medial axis point (e.g., q3) is associated with the depth pixel (p3), the angle may also be considered to be one example of a rotation invariant angle for the depth pixel. Referring to either FIG. 4D or 4E, angle θ3a or θ3b is determined. This angle is assigned to the present depth pixel. The process 480 continues until angles have been assigned to all depth pixels of interest.
  • In step 490, smoothing of the results is performed.
  • the angle of each depth pixel may be compared to its neighbors, with outliers being smoothed.
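  • The same idea can be sketched with a medial axis as the reference line, as in process 480. Here scikit-image's medial_axis function stands in for the iterative-erosion computation described above; the rest mirrors the edge-based sketch, and the names are again illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import medial_axis

def in_plane_angles_from_medial_axis(object_mask: np.ndarray) -> np.ndarray:
    """Angle from each object pixel to its nearest medial-axis point (cf. FIG. 4D)."""
    medial = medial_axis(object_mask)                         # approximate medial axes
    _, (qv, qu) = distance_transform_edt(~medial, return_indices=True)
    vv, uu = np.indices(object_mask.shape)
    angles = np.arctan2(qv - vv, qu - uu)                     # line p -> q vs. u-axis
    angles[~object_mask] = np.nan
    return angles
```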
  • FIG. 5 is a flowchart of one embodiment of a process 500 of estimating local orientation of depth pixels for out-of-plane orientation.
  • the out-of-plane orientation is based on the surface normal of the object of interest at the depth pixel.
  • Process 500 is one embodiment of steps 308 - 310 and will be discussed with reference to FIG. 6A .
  • a point cloud model is developed.
  • the point cloud model may be a 3D model in which each depth pixel in the depth map is assigned a coordinate in 3D space, for example.
  • the point cloud may have one point for each depth pixel in the depth map, but that is not an absolute requirement. To facilitate discussion, it will be assumed that each point in the point cloud has a corresponding depth pixel in the depth map. However, note that this one-to-one correspondence is not a requirement.
  • the term “depth point” will be used to refer to a point in the point cloud.
  • FIG. 6A depicts a point cloud model 605 of a hand and portion of an arm.
  • the point cloud model is depicted within an (a, b, c) global coordinate system.
  • an a-axis, b-axis, and c-axis of a global coordinate system are depicted.
  • two of the axes in the global coordinate system correspond to the u-axis and v-axis of the depth map.
  • the position along the third axis in the global coordinate system may be determined based on the depth value for a depth pixel in the depth map.
  • the point cloud model 605 may be generated in another manner.
  • using a point cloud model 605 is just one way to determine a surface normal. Other techniques could be used.
  • In step 504 of FIG. 5, a determination is made whether there are more depth pixels to process. Note that the processing here is of depth points in the point cloud 605.
  • a surface normal is determined at the present point.
  • By surface normal, it is meant a line that is perpendicular to the surface of the object of interest.
  • the surface normal may be determined by analyzing nearby depth points.
  • the surface normal may be defined in terms of the (a, b, c) global coordinate system.
  • In FIG. 6A, the surface normal is depicted as the z-axis that touches the second finger of the hand.
  • the x-axis, y-axis, and z-axis form a local coordinate system for the pixel presently being analyzed. The local coordinate system will be further discussed below. Processing continues until a surface normal is determined for all depth points.
  • In step 508, smoothing of the surface normals is performed.
  • Using surface normals is one example of how to determine a local orientation for depth pixels that may be used for out-of-plane rotation.
  • other parameters could be determined.
  • Although the discussion of FIG. 5 was of determining a surface normal at a depth point, this is one technique for determining a local orientation of a depth pixel. A sketch of one such technique follows below.
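  • One way to estimate the per-pixel surface normals of process 500, sketched under the assumption of a simple pinhole camera: back-project the depth map to a point cloud and take the cross product of local tangent vectors. The intrinsics fx, fy, cx, cy are hypothetical placeholders, not the capture device's calibration.

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Depth map (meters) -> point cloud of shape (H, W, 3) in camera coordinates."""
    vv, uu = np.indices(depth.shape)
    X = (uu - cx) * depth / fx
    Y = (vv - cy) * depth / fy
    return np.dstack([X, Y, depth])

def surface_normals(points):
    """Per-pixel unit normal from the cross product of local tangent vectors."""
    du = np.gradient(points, axis=1)        # change along the u direction
    dv = np.gradient(points, axis=0)        # change along the v direction
    n = np.cross(du, dv)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.where(norm == 0, 1.0, norm)

# Hypothetical usage with placeholder intrinsics:
# normals = surface_normals(backproject(depth_m, fx=570.0, fy=570.0, cx=320.0, cy=240.0))
```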
  • a local coordinate system is determined for each of the depth pixels.
  • FIGS. 6A and 6B show an object with a local coordinate system (labeled as x-axis, y-axis, z-axis) for one of the depth points.
  • the local coordinate system has three perpendicular axes, in this embodiment.
  • the origin of the local coordinate system is at one of the 3D depth points in the object of interest.
  • One axis (e.g., the z-axis) of the local coordinate system may be aligned with the surface normal at that depth point.
  • the local coordinate system is depicted relative to one of the depth points in the depth cloud 605
  • the local coordinate system is considered to be a local coordinate system for one of the depth pixels in the depth map.
  • a feature region or window 604 is also depicted in FIGS. 6A-6B .
  • the dashed lines are depicted to demonstrate the position of the feature window 604 relative to the local coordinate system.
  • the feature window 604 may be used to help define features (also referred to as “feature probes”). For example, a feature probe can be defined based on the origin of the local coordinate system and some point in the feature window. Note that the feature window may be transformed to the depth map prior to using the feature probe.
  • the local coordinate system moves consistently with the hand. For example, if the hand rotates, the local coordinate system rotates by a corresponding amount.
  • the object could be any object.
  • the local coordinate system moves consistently with the object.
  • features are defined based on the local coordinate system. Therefore, the features may be invariant to factors such as rotation, translation, scale, etc.
  • the hand has been rotated relative to the hand of FIG. 6A .
  • the x-axis and the y-axis are in the same position relative to the hand.
  • the z-axis is not depicted in FIG. 6B , but it will be understood that it is still normal to the surface at the location of the depth point.
  • the feature window 604 is also in the same position relative to the local coordinate system. Therefore, the feature window 604 is also in the same position relative to the hand. Note that this means that if a feature is defined in the local coordinate system, that the feature will automatically rotate with the hand (or other object).
  • FIG. 7 depicts an image window 702 associated with a 2D depth image coordinate system and a corresponding window 604 in a 3D local coordinate system.
  • the image window 702 represents a portion of the depth map.
  • the point p(u, v, d) represents the test pixel from the depth map, where (u, v) are image coordinates and d is depth.
  • the point q(u + r·cos(θ), v + r·sin(θ), d) represents the point of interest, also in the depth map, where r is the in-plane distance from p to q.
  • the point of interest could be any pixel in the depth image.
  • the arrows in the image window 702 that originate from pixel p are parallel to the u-axis and the v-axis.
  • a line is depicted between the pixel p and the point of interest q.
  • the angle θ is the estimated in-plane rotation, which in this example is defined as the angle between the line and a reference axis.
  • the reference axis is the u-axis, but any reference axis could be chosen.
  • the point p(u, v, d) might represent one of the depth pixels, such as p1.
  • the point q might represent the nearest point on the edge of the hand, such as q1.
  • the angle θ might represent the angle θ1a between the line from p1 to q1 and the reference axis, as shown in FIG. 4A.
  • the angle θ might represent the angle θ1b between the tangent to the edge of the hand at point q1 and some reference axis, as shown in FIG. 4B.
  • the point p(u, v, d) might represent the depth pixel p3.
  • the point q might represent the nearest point on the medial axis, q3.
  • the angle θ might represent the angle θ3a between the line from p3 to q3 and the reference axis, as depicted in FIG. 4D.
  • the angle θ might represent the angle θ3b between the tangent to the medial axis at point q3 and some reference axis, as depicted in FIG. 4E.
  • the window 604 in the local 3D coordinate system contains the point P, which corresponds to pixel p in the 2D depth map.
  • point P could be the point in the point cloud of FIGS. 6A-6B.
  • Window 604 represents a feature window 604 in the local 3D coordinate system. Examples of a local coordinate system and feature window 604 were discussed with respect to FIGS. 6A and 6B.
  • a vector n, which corresponds to the surface normal, is depicted with its tail at point P.
  • a vector V has its tail at point P and its head at point Q.
  • Point Q is the point in 3D space that corresponds to point q in the 2D depth map.
  • Vectors r1 and r2 may correspond to the x-axis and the y-axis in the local coordinate system (see, for example, FIGS. 6A-6B). Techniques for transforming between the local 3D coordinate system and the 2D image coordinate system will now be discussed. These techniques may be used for step 316.
  • Equation 1 states a general form for the transformation equation.
  • the transformation equation applies a rotation matrix R, a diagonal scaling matrix S, and a camera projection function Π.
  • the vector t is a translation.
  • the camera matrix projects from 3D into 2D.
  • In Equation 1, deHom(.) denotes dehomogenization, which converts a homogeneous coordinate into a 2D pixel coordinate by dividing by its last component.
  • the present pixel in the depth map being examined may be defined as p(u, v, d), where (u, v) are the depth map pixel coordinates and “d” is a depth value for the depth pixel.
  • the point of interest may be any point.
  • One example is the closest edge point, as discussed in FIGS. 4A-4C .
  • Another example is the closest medial axis point, as discussed in FIGS. 4D-4F .
  • these points of interest may be selected such that a local orientation of the depth pixel that is in-plane rotation invariant may be determined.
  • An estimated in-plane rotation θ is also determined, as in the examples above.
  • an estimated out-of-plane rotation is also determined as part of the local orientation.
  • the surface normal is estimated as discussed with respect to FIG. 5 .
  • This window may be used for the feature window 604 .
  • if the window scaling is defined in 3D, then the window may be given actual measurements, such that after it is projected to 2D it will scale properly.
  • the window could be defined as being 100 mm on each of three sides.
  • the feature window 604 scales properly. Referring back to FIGS. 6A-6B, the feature window 604 was depicted in two dimensions (x, y) for clarity. However, the feature window 604 can also be defined as a three-dimensional object, using the z-axis.
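  • For illustration only, a 100 mm cube-shaped feature window centered on the pixel's 3D point could be described in the local coordinate system by its corner offsets, which are later transformed into the depth map; the helper below is an assumed sketch, not the patent's implementation.

```python
import numpy as np
from itertools import product

def feature_window_corners(side_mm=100.0):
    """Corner offsets of a cube-shaped feature window, centered at the origin
    of the local coordinate system, with edges of length side_mm."""
    half = side_mm / 2.0
    return np.array(list(product([-half, half], repeat=3)))  # 8 corners (x, y, z)
```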
  • π(.) refers to a generic camera projection function that transforms a 3D point in the camera coordinate system into a pixel homogeneous coordinate.
  • the inverse transformation is given by π⁻¹(.).
  • the camera projection function may be used to factor in various physical properties such as focal lengths (f1, f2), principal point (c1, c2), skew coefficient, lens distortion parameters, etc.
  • K is a camera matrix as shown in Equation 3.
  • a more general camera projection function that does account for radial distortion can be used instead. Camera projection functions are well known and, therefore, will not be discussed in detail.
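  • As a hedged sketch of a simple pinhole model consistent with the discussion above (ignoring lens distortion; the exact parameterization of Equation 3 may differ), the projection π(.) and its inverse π⁻¹(.) might look like:

```python
import numpy as np

def make_camera_matrix(f1, f2, c1, c2, skew=0.0):
    """Pinhole camera matrix K built from focal lengths, principal point, and skew."""
    return np.array([[f1, skew, c1],
                     [0.0, f2,  c2],
                     [0.0, 0.0, 1.0]])

def project(K, P):
    """pi(.): map a 3D point in camera coordinates to a homogeneous pixel coordinate."""
    return K @ np.asarray(P, dtype=float)

def unproject(K, u, v, d):
    """pi^-1(.): map a depth pixel (u, v, d) back to a 3D point in camera coordinates."""
    return d * (np.linalg.inv(K) @ np.array([u, v, 1.0]))
```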
  • the rotation matrix may be computed as in Equation 4.
  • the vector $\vec{r}_3$ may be a unitized version of the surface normal. Note that this may be the z-axis of the window 604.
  • the vector $\vec{r}_1$ (x-axis) may be the component of $\vec{V}$ that is orthogonal to the surface normal. Recall that $\vec{V}$ was defined in FIG. 7 as Q−P.
  • the vector $\vec{V}$ may be referred to herein as an in-plane rotation-variant vector.
  • the vector $\vec{r}_2$ (y-axis) may be computed as the cross product of $\vec{r}_3$ and $\vec{r}_1$.
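  • Putting those definitions together, one possible sketch of the local-frame construction (unitize the normal for the z-axis, orthogonalize V against it for the x-axis, and complete the frame with a cross product):

```python
import numpy as np

def local_rotation(n, V):
    """Build the 3x3 rotation matrix R = [r1 r2 r3] of the local coordinate system
    from the surface normal n and the in-plane rotation-variant vector V = Q - P."""
    r3 = n / np.linalg.norm(n)            # z-axis: unitized surface normal
    v_perp = V - np.dot(V, r3) * r3       # component of V orthogonal to r3
    r1 = v_perp / np.linalg.norm(v_perp)  # x-axis
    r2 = np.cross(r3, r1)                 # y-axis completes the right-handed frame
    return np.column_stack([r1, r2, r3])
```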
  • the translation vector $\vec{t}$ may be computed as in Equation 8.
  • the vector $\vec{V}$ may be computed as in Equations 9A-9C.
  • the full 3D transform may be computed as in Equations 10A and 10B.
  • $T_{3\times4} = K\left(\begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & s_z \end{bmatrix}\begin{bmatrix} R_{3\times3} & \vec{t} \end{bmatrix}\right)$
  • the direct transformation from canonical coordinates (x w , y w ) in a [ ⁇ 1,1] window to depth pixel coordinates in the depth map may be determined by pre-computing the homography transformation H as Equation 11A and then calculating x, as in Equation 11B.
  • Performing the transform in the other direction may be as in Equation 12.
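  • The composition might be sketched as follows, assuming the pieces above; Equations 11A-11B are not reproduced here, so the helper simply builds the 3×4 transform T and maps a canonical window coordinate in [−1, 1] to a depth-map pixel.

```python
import numpy as np

def full_transform(K, R, t, s=(1.0, 1.0, 1.0)):
    """T_3x4 = K (diag(s) [R | t]): local 3D window coordinates -> homogeneous pixels."""
    S = np.diag(s)
    Rt = np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])
    return K @ (S @ Rt)

def window_to_pixel(T, x_w, y_w, z_w=0.0):
    """Map a canonical window coordinate (each component in [-1, 1]) to a pixel."""
    x_h = T @ np.array([x_w, y_w, z_w, 1.0])
    return x_h[:2] / x_h[2]  # dehomogenize
```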
  • FIG. 8 is a flowchart of one embodiment of a process 800 of establishing a local orientation for a depth pixel factoring in the different possibilities. Process 800 may be repeated for each depth pixel for which a local orientation is to be determined.
  • In step 802, a determination is made whether an estimate of a local in-plane orientation is to be made. If so, then the in-plane estimate is made in step 804.
  • the estimate of the in-plane orientation may be an angle with respect to some reference axis in the depth map (or 2D image coordinate system). If the in-plane estimate is not to be made, then the angle θ may be set to a default value in step 806. As one example, the angle θ may be set to 0 degrees. Therefore, all depth pixels will have the same angle.
  • the processing to determine the local coordinate system may be the same.
  • the calculations may be performed in a similar manner by using the default value for θ.
  • In step 808, a determination is made whether an estimate of a local out-of-plane orientation is to be made. If so, then the out-of-plane estimate is made in step 810. Note that if the in-plane orientation was not determined, then the out-of-plane orientation is determined in step 810.
  • Techniques for determining a local out-of-plane orientation have been discussed with respect to FIG. 5 .
  • the estimate of the out-of-plane orientation may be a surface normal of the object at a given depth pixel or point in a point cloud model.
  • the output of the estimate may be a vector.
  • the vector may be set to a default value in step 812 .
  • the vector may be set to being parallel to the optical axis of the camera. Therefore, all depth pixels will have the same vectors.
  • the processing to determine the local coordinate system may be the same.
  • the calculations may be performed in a similar manner by using the default value for vector $\vec{n}$.
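  • A brief sketch of how those defaults might be wired together (flag names and the optical-axis convention are assumptions for illustration):

```python
import numpy as np

def local_orientation(pixel, estimate_in_plane, estimate_out_of_plane,
                      in_plane_fn, normal_fn):
    """Return (theta, n) for a depth pixel, falling back to the defaults of
    steps 806 and 812 when an estimate is not requested."""
    theta = in_plane_fn(pixel) if estimate_in_plane else 0.0          # default angle
    n = normal_fn(pixel) if estimate_out_of_plane else np.array([0.0, 0.0, 1.0])
    # default normal: parallel to the camera's optical axis
    return theta, n
```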
  • FIG. 9 illustrates an example of a computing environment including a multimedia console (or gaming console) 100 that may be used to implement the computing environment 12 of FIG. 2 .
  • the capture device 20 may be coupled to the computing environment.
  • the multimedia console 100 has a central processing unit (CPU) 101 having a level 1 cache 102 , a level 2 cache 104 , and a flash ROM (Read Only Memory) 106 .
  • the level 1 cache 102 and a level 2 cache 104 temporarily store data and hence reduce the number of memory access cycles, thereby improving processing speed and throughput.
  • the CPU 101 may be provided having more than one core, and thus, additional level 1 and level 2 caches 102 and 104 .
  • the flash ROM 106 may store executable code that is loaded during an initial phase of a boot process when the multimedia console 100 is powered ON.
  • a graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display.
  • a memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112 , such as, but not limited to, a RAM (Random Access Memory).
  • the multimedia console 100 includes an I/O controller 120 , a system management controller 122 , an audio processing unit 123 , a network interface controller 124 , a first USB host controller 126 , a second USB controller 128 and a front panel I/O subassembly 130 that are preferably implemented on a module 118 .
  • the USB controllers 126 and 128 serve as hosts for peripheral controllers 142 ( 1 )- 142 ( 2 ), a wireless adapter 148 , and an external memory device 146 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.).
  • the network interface 124 and/or wireless adapter 148 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of various wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
  • System memory 143 is provided to store application data that is loaded during the boot process.
  • a media drive 144 is provided and may comprise a DVD/CD drive, hard drive, or other removable media drive, etc.
  • the media drive 144 may be internal or external to the multimedia console 100 .
  • Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100 .
  • the media drive 144 is connected to the I/O controller 120 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).
  • the system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100 .
  • the audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link.
  • the audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.
  • the front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152 , as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100 .
  • a system power supply module 136 provides power to the components of the multimedia console 100 .
  • a fan 138 cools the circuitry within the multimedia console 100 .
  • the CPU 101 , GPU 108 , memory controller 110 , and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures.
  • bus architectures can include a Peripheral Component Interconnects (PCI) bus, PCI-Express bus, etc.
  • application data may be loaded from the system memory 143 into memory 112 and/or caches 102 , 104 and executed on the CPU 101 .
  • the application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 100 .
  • applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 100 .
  • the multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148 , the multimedia console 100 may further be operated as a participant in a larger network community.
  • a set amount of hardware resources are reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbs), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's view.
  • the memory reservation preferably is large enough to contain the launch kernel, concurrent system applications and drivers.
  • the CPU reservation is preferably constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.
  • lightweight messages generated by the system applications are displayed by using a GPU interrupt to schedule code to render popup into an overlay.
  • the amount of memory required for an overlay depends on the overlay area size and the overlay preferably scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resynch is eliminated.
  • After the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities.
  • the system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above.
  • the operating system kernel identifies threads that are system application threads versus gaming application threads.
  • the system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.
  • a multimedia console application manager controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.
  • Input devices are shared by gaming applications and system applications.
  • the input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device.
  • the application manager preferably controls the switching of the input stream without the gaming application's knowledge, and a driver maintains state information regarding focus switches.
  • the capture device 20 of FIG. 2 may be an additional input device to multimedia console 100 .
  • FIG. 10 illustrates another example of a computing environment that may be used to implement the computing environment 12 of FIG. 2 .
  • the capture device 20 may be coupled to the computing environment.
  • the computing environment of FIG. 10 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the presently disclosed subject matter. Neither should the computing environment 12 of FIG. 2 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment of FIG. 10 .
  • the various depicted computing elements may include circuitry configured to instantiate specific aspects of the present disclosure.
  • the term circuitry used in the disclosure can include specialized hardware components configured to perform function(s) by firmware or switches.
  • circuitry can include a general-purpose processing unit, memory, etc., configured by software instructions that embody logic operable to perform function(s).
  • an implementer may write source code embodying logic and the source code can be compiled into machine readable code that can be processed by the general purpose processing unit.
  • the computing system 220 comprises a computer 241 , which typically includes a variety of computer readable media.
  • Computer readable media can be any available media that can be accessed by computer 241 and includes both volatile and nonvolatile media, removable and non-removable media.
  • Computer readable media may comprise computer storage media and communication media.
  • Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
  • the computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
  • FIG. 10 illustrates a hard disk drive 238 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 239 that reads from or writes to a removable, nonvolatile magnetic disk 254 , and an optical disk drive 240 that reads from or writes to a removable, nonvolatile optical disk 253 such as a CD ROM or other optical media.
  • removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like.
  • the hard disk drive 238 is typically connected to the system bus 221 through a non-removable memory interface such as interface 234.
  • magnetic disk drive 239 and optical disk drive 240 are typically connected to the system bus 221 by a removable memory interface, such as interface 235 .
  • a basic input/output system 224 (BIOS), containing the basic routines that help to transfer information between elements within computer 241 , such as during start-up, is typically stored in ROM 223 .
  • RAM 260 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 259 .
  • FIG. 10 illustrates operating system 225 , application programs 226 , other program modules 227 , and program data 228 .
  • the drives and their associated computer storage media discussed above and illustrated in FIG. 10 provide storage of computer readable instructions, data structures, program modules and other data for the computer 241 .
  • hard disk drive 238 is illustrated as storing operating system 258 , application programs 257 , other program modules 256 , and program data 255 .
  • operating system 258, application programs 257, other program modules 256, and program data 255 are given different numbers here to illustrate that, at a minimum, they are different copies.
  • a user may enter commands and information into the computer 241 through input devices such as a keyboard 251 and pointing device 252 , commonly referred to as a mouse, trackball or touch pad.
  • Other input devices may include a microphone, joystick, game pad, satellite dish, scanner, or the like.
  • These and other input devices are often connected to the processing unit 259 through a user input interface 236 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB).
  • the cameras 34 , 36 and capture device 20 of FIG. 2 may define additional input devices for the computer 241 .
  • a monitor 242 or other type of display device is also connected to the system bus 221 via an interface, such as a video interface 232 .
  • computers may also include other peripheral output devices such as speakers 244 and printer 243, which may be connected through an output peripheral interface 233.
  • the computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246 .
  • the remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been illustrated in FIG. 10.
  • the logical connections depicted in FIG. 10 include a local area network (LAN) 245 and a wide area network (WAN) 249, but may also include other networks.
  • Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
  • When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet.
  • the modem 250 which may be internal or external, may be connected to the system bus 221 via the user input interface 236 , or other appropriate mechanism.
  • program modules depicted relative to the computer 241 may be stored in the remote memory storage device.
  • FIG. 10 illustrates remote application programs 248 as residing on memory device 247 . It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • the disclosed technology is operational with numerous other general purpose or special purpose computing system environments or configurations.
  • Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • the disclosed technology may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer.
  • software and program modules as described herein include routines, programs, objects, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types.
  • Hardware or combinations of hardware and software may be substituted for software modules as described herein.
  • the disclosed technology may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
  • program modules may be located in both local and remote computer storage media including memory storage devices.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Technology is described for determining and using invariant features for computer vision. A local orientation may be determined for each depth pixel in a subset of the depth pixels in a depth map. The local orientation may be an in-plane orientation, an out-of-plane orientation, or both. A local coordinate system is determined for each of the depth pixels in the subset based on the local orientation of the corresponding depth pixel. A feature region is defined relative to the local coordinate system for each of the depth pixels in the subset. The feature region for each of the depth pixels in the subset is transformed from the local coordinate system to an image coordinate system of the depth map. The transformed feature regions are used to process the depth map.

Description

    BACKGROUND
  • Computer games and multimedia applications have begun employing cameras and software gesture recognition engines to provide a human computer interface (“HCI”). With HCI, user body parts and movements are detected, interpreted and used to control game characters or other aspects of an application.
  • One technique for identifying objects such as body parts is computer vision. Some computer vision techniques develop a “classifier” by analyzing one or more example images. As the name implies, an example image is an image that contains one or more examples of the objects that are to be identified. Often, many example images need to be analyzed to adequately develop or “train” the classifier to recognize the object. In some techniques, features are extracted from the example image. Those features which work best to identify the object may be kept for use at run time.
  • The classifier may later be used during “run time” to identify objects such as body parts. For example, a computer vision system may capture an image in real time, such as a user interacting with a computer system. The computer vision system uses the classifier to identify objects, such as the hand of the user. In some techniques, the classifier analyzes features that are extracted from the image in order to identify the object.
  • One difficulty with computer vision is that during run time objects such as body parts could have many possible orientations relative to the camera. For example, the user might have their hand rotated at virtually any angle relative to the camera. Note that for some techniques the features that are extracted are not invariant to the possible orientations of the object. For example, the features may not be invariant to possible rotations of a user's hand.
  • To account for the multitude of possible rotations of the object (e.g., hand), the example images that are used to build the classifier could theoretically contain many different rotations. For example, example images that show a multitude of possible rotations of a hand could be used to train the classifier. At one extreme, if the example images do not contain enough possible rotations, then the accuracy of the classifier may be poor. At the other extreme, containing a multitude of rotations in the example images may lead to an overly complex classifier, which may result in slow processing speed and high memory usage at run-time. For example, the features that work well for one rotation may not work well for another rotation. This may result in the classifier needing to be able to account for all of the possible rotations.
  • SUMMARY
  • Technology is described for determining and using features that may be used to identify objects using computer vision. The features may be invariant to various orientations of the object to be identified relative to the camera. For example, the features may be rotation invariant. Therefore, fewer example images may be needed to train the classifier to recognize the object. Consequently, the complexity of the classifier may be simplified without sacrificing accuracy during run time. Techniques may be used to identify objects at run time using computer vision with the use of rotation invariant features.
  • One embodiment includes a method of processing a depth map that includes the following. A depth map that includes depth pixels is accessed. The depth map is associated with an image coordinate system having a plane. A local orientation for each depth pixel in a subset of the depth pixels is estimated. The local orientation is one or both of an in-plane orientation and an out-of-plane orientation relative to the plane of the image coordinate system. A local coordinate system for each of the depth pixels in the subset is determined. Each local coordinate system is based on the local orientation of the corresponding depth pixel. A feature region is defined relative to the local coordinate system for each of the depth pixels in the subset. The feature region for each of the depth pixels in the subset is transformed from the local coordinate system to the image coordinate system. The transformed feature regions are used to process the depth map. The depth map may be processed at either training time or run time.
  • One embodiment includes a system comprising a depth camera and logic coupled to the depth camera. The depth camera is for generating depth maps that include a plurality of depth pixels. Each pixel has a depth value, and each depth map is associated with a 2D image coordinate system. The logic is operable to access a depth map from the depth camera; the depth map is associated with an image coordinate system having a plane. The logic is operable to estimate a local orientation for each depth pixel in a subset of the depth pixels. The local orientation includes one or both of an in-plane orientation that is in the plane of the 2D image coordinate system and an out-of-plane orientation that is out of the plane of the 2D image coordinate system. The logic is operable to define a local 3D coordinate system for each of the depth pixels in the subset, where each local 3D coordinate system is based on the local orientation of the corresponding depth pixel. The logic is operable to define a feature region relative to the local coordinate system for each of the depth pixels in the subset. The logic is operable to transform the feature region for each of the depth pixels in the subset from the local 3D coordinate system to the 2D image coordinate system. The logic is operable to identify an object in the depth map based on the transformed feature regions.
  • One embodiment is a computer readable storage medium having instructions stored thereon which, when executed on a processor, cause the processor to perform the following steps. A depth map that includes an array of depth pixels is accessed. Each depth pixel has a depth value, and the depth map is associated with a 2D image coordinate system. A local orientation for each depth pixel in a subset of the depth pixels is determined. The local orientation includes an in-plane orientation that is in the plane of the 2D image coordinate system and an out-of-plane orientation that is out of the plane of the 2D image coordinate system. A 3D model for the depth map is determined. The model includes 3D points that are based on the depth pixels; each of the points has a corresponding depth pixel. A local 3D coordinate system is defined for each of the plurality of points, where each local 3D coordinate system is based on the position and local orientation of the corresponding depth pixel. Feature test points are determined relative to the local coordinate system for each of the points. The feature test points are transformed from the local 3D coordinate system to the 2D image coordinate system for each of the feature test points. An object is identified in the depth map based on the transformed feature test points.
  • This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 depicts one embodiment of a target detection and tracking system tracking a user.
  • FIG. 2 depicts one embodiment of a target detection and tracking system.
  • FIG. 3A is a flowchart of one embodiment of a process of training a machine learning classifier using invariant features.
  • FIG. 3B is a flowchart that describes a process of using invariant features to identify objects using computer vision.
  • FIG. 4A depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated based on edges, in accordance with one embodiment.
  • FIG. 4B depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated based on edges, in accordance with one embodiment.
  • FIG. 4C is a flowchart of one embodiment of a process of assigning angles to depth pixels based on edges.
  • FIG. 4D depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated based on medial axes, in accordance with one embodiment.
  • FIG. 4E depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated based on medial axes, in accordance with one embodiment.
  • FIG. 4F is a flowchart of one embodiment of a process of assigning angles to depth pixels based on medial axes.
  • FIG. 5 is a flowchart of one embodiment of a process estimating local orientation of depth pixels for out-of-plane orientation.
  • FIG. 6A and FIG. 6B depict different rotations of a point cloud model with one embodiment of a local coordinate system.
  • FIG. 7 depicts a 2D image coordinate system and a 3D local coordinate system used in various embodiments, with a corresponding feature window in each coordinate system.
  • FIG. 8 is a flowchart of one embodiment of a process of establishing a local in-plane and/or out-of-plane orientation for a depth pixel.
  • FIG. 9 illustrates an example of a computing environment in accordance with embodiments of the present disclosure.
  • FIG. 10 illustrates an example of a computing environment in accordance with embodiments of the present disclosure.
  • DETAILED DESCRIPTION
  • Technology is described for developing and using features that may be used to automatically identify objects using computer vision. The features may be rotation invariant. The features may also be translation invariant and/or scale invariant. In one embodiment, the features are in-plane rotation invariant. In one embodiment, the features are out-of-plane rotation invariant. In one embodiment, the features are both in-plane and out-of-plane rotation invariant. By being invariant to transformation such as rotation, the training data requirements and the memory and processing requirements of the classifier can be reduced without adversely affecting test accuracy.
  • In some embodiments, the invariant features are used in a motion capture system having a capture device. For example, rotation invariant features may be used to identify a user's hand such that the hand can be tracked. One example application is to determine gestures made by the user to allow the user to interact with the system. Therefore, an example motion capture system will be described. However, it will be understood that technology described herein is not limited to a motion capture system.
  • FIG. 1 depicts an example of a motion capture system 10 in which a person interacts with an application. The motion capture system 10 includes a display 96, a capture device 20, and a computing environment or apparatus 12. The capture device 20 may include an image camera component 22 having a light transmitter 24, light receiver 25, and a red-green-blue (RGB) camera 28. In one embodiment, the light transmitter 24 emits a collimated light beam. Examples of collimated light include, but are not limited to, Infrared (IR) and laser. In one embodiment, the light transmitter 24 is an LED. Light that reflects off from an object 8 in the field of view is detected by the light receiver 25.
  • A user, also referred to as a person or player, stands in a field of view 6 of the capture device 20. Lines 2 and 4 denote a boundary of the field of view 6. In this example, the capture device 20 and computing environment 12 provide an application in which an avatar 97 on the display 96 tracks the movements of the object 8 (e.g., a user). For example, the avatar 97 may raise an arm when the user raises an arm. The avatar 97 is standing on a road 98 in a 3-D virtual world. A Cartesian world coordinate system may be defined which includes a z-axis which extends along the focal length of the capture device 20, e.g., horizontally, a y-axis which extends vertically, and an x-axis which extends laterally and horizontally. Note that the perspective of the drawing is modified as a simplification, as the display 96 extends vertically in the y-axis direction and the z-axis extends out from the capture device 20, perpendicular to the y-axis and the x-axis, and parallel to a ground surface on which the user stands.
  • Generally, the motion capture system 10 is used to recognize, analyze, and/or track an object. Invariant features (e.g., rotation invariant) that are developed in accordance to embodiments can be used in the motion capture system 10. The computing environment 12 can include a computer, a gaming system or console, or the like, as well as hardware components and/or software components to execute applications.
  • The capture device 20 may include a camera which is used to visually monitor one or more objects 8, such as the user, such that gestures and/or movements performed by the user may be captured, analyzed, and tracked to perform one or more controls or actions within an application, such as animating an avatar or on-screen character or selecting a menu item in a user interface (UI). A gesture may be dynamic, comprising a motion, such as mimicking throwing a ball. A gesture may be a static pose, such as holding one's forearms crossed. A gesture may also incorporate props, such as swinging a mock sword.
  • Some movements of the object 8 may be interpreted as controls that may correspond to actions other than controlling an avatar. For example, in one embodiment, the player may use movements to end, pause, or save a game, select a level, view high scores, communicate with a friend, and so forth. The player may use movements to select the game or other application from a main user interface, or to otherwise navigate a menu of options. Thus, a full range of motion of the object 8 may be available, used, and analyzed in any suitable manner to interact with an application.
  • The person can hold an object such as a prop when interacting with an application. In such embodiments, the movement of the person and the object may be used to control an application. For example, the motion of a player holding a racket may be tracked and used for controlling an on-screen racket in an application which simulates a tennis game. In another example embodiment, the motion of a player holding a toy weapon such as a plastic sword may be tracked and used for controlling a corresponding weapon in the virtual world of an application which provides a pirate ship.
  • The motion capture system 10 may further be used to interpret target movements as operating system and/or application controls that are outside the realm of games and other applications which are meant for entertainment and leisure. For example, virtually any controllable aspect of an operating system and/or application may be controlled by movements of the object 8.
  • The motion capture system 10 may be connected to an audiovisual device such as the display 96, e.g., a television, a monitor, a high-definition television (HDTV), or the like, or even a projection on a wall or other surface, that provides a visual and audio output to the user. An audio output can also be provided via a separate device. To drive the display, the computing environment 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that provides audiovisual signals associated with an application. The display 96 may be connected to the computing environment 12 via, for example, an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, or the like.
  • FIG. 2 illustrates one embodiment of a target detection and tracking system 10 including a capture device 20 and computing environment 12 that may be used to recognize human and non-human targets in a capture area (with or without special sensing devices attached to the subjects), uniquely identify them, and track them in three dimensional space. In one embodiment, the capture device 20 may be a depth camera (or depth sensing camera) configured to capture video with depth information including a depth map that may include depth values via any suitable technique including, for example, time-of-flight, structured light, stereo image, or the like. In one embodiment, the capture device 20 may include a depth sensing image sensor. In one embodiment, the capture device 20 may organize the calculated depth information into “Z layers,” or layers that may be perpendicular to a Z-axis extending from the depth camera along its line of sight.
  • As shown in FIG. 2, the capture device 20 may include an image camera component 32. In one embodiment, the image camera component 32 may be a depth camera that may capture a depth map of a scene. The depth map may include a two-dimensional (2-D) pixel area of the captured scene where each pixel in the 2-D pixel area may represent a depth value such as a distance in, for example, centimeters, millimeters, or the like of an object in the captured scene from the camera. The image camera component 32 may be pre-calibrated to obtain estimates of camera intrinsic parameters such as focal length, principal point, lens distortion parameters etc. Techniques for camera calibration are discussed in, Z. Zhang, “A Flexible New Technique for Camera Calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(11):1330-1334, 2000, which is hereby incorporated by reference.
  • As shown in FIG. 2, the image camera component 32 may include an IR light component 34, a three-dimensional (3-D) camera 36, and an RGB camera 38 that may be used to capture the depth map of a capture area. For example, in time-of-flight analysis, the IR light component 34 of the capture device 20 may emit an infrared light onto the capture area and may then use sensors to detect the backscattered light from the surface of one or more targets and objects in the capture area using, for example, the 3-D camera 36 and/or the RGB camera 38. In some embodiments, capture device 20 may include an IR CMOS image sensor. In some embodiments, pulsed infrared light may be used such that the time between an outgoing light pulse and a corresponding incoming light pulse may be measured and used to determine a physical distance from the capture device 20 to a particular location on the targets or objects in the capture area. Additionally, the phase of the outgoing light wave may be compared to the phase of the incoming light wave to determine a phase shift. The phase shift may then be used to determine a physical distance from the capture device to a particular location on the targets or objects.
  • In one embodiment, time-of-flight analysis may be used to indirectly determine a physical distance from the capture device 20 to a particular location on the targets or objects by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.
  • In another example, the capture device 20 may use structured light to capture depth information. In such an analysis, patterned light (i.e., light displayed as a known pattern such as grid pattern or a stripe pattern) may be projected onto the capture area via, for example, the IR light component 34. Upon striking the surface of one or more targets (or objects) in the capture area, the pattern may become deformed in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 36 and/or the RGB camera 38 and analyzed to determine a physical distance from the capture device to a particular location on the targets or objects.
  • In some embodiments, two or more different cameras may be incorporated into an integrated capture device. For example, a depth camera and a video camera (e.g., an RGB video camera) may be incorporated into a common capture device. In some embodiments, two or more separate capture devices may be cooperatively used. For example, a depth camera and a separate video camera may be used. When a video camera is used, it may be used to provide target tracking data, confirmation data for error correction of target tracking, image capture, face recognition, high-precision tracking of fingers (or other small features), light sensing, and/or other functions.
  • In one embodiment, the capture device 20 may include two or more physically separated cameras that may view a capture area from different angles to obtain visual stereo data that may be resolved to generate depth information. Depth may also be determined by capturing images using a plurality of detectors that may be monochromatic, infrared, RGB, or any other type of detector and performing a parallax calculation. Other types of depth map sensors can also be used to create a depth map.
  • As shown in FIG. 2, capture device 20 may include a microphone 40. The microphone 40 may include a transducer or sensor that may receive and convert sound into an electrical signal. In one embodiment, the microphone 40 may be used to reduce feedback between the capture device 20 and the computing environment 12 in the target detection and tracking system 10. Additionally, the microphone 40 may be used to receive audio signals that may also be provided by the user to control applications such as game applications, non-game applications, or the like that may be executed by the computing environment 12.
  • The capture device 20 may include logic 42 that is in communication with the image camera component 32. The logic 42 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions. The logic 42 may also include hardware such as an ASIC, electronic circuitry, logic gates, etc. In the event that the logic 42 is a processor, the processor 42 may execute instructions that may include instructions for storing profiles, receiving the depth map, determining whether a suitable target may be included in the depth map, converting the suitable target into a skeletal representation or model of the target, or any other suitable instructions.
  • It is to be understood that at least some target analysis and tracking operations may be executed by processors contained within one or more capture devices. A capture device may include one or more onboard processing units configured to perform one or more target analysis and/or tracking functions. Moreover, a capture device may include firmware to facilitate updating such onboard processing logic.
  • As shown in FIG. 2, the capture device 20 may include a memory component 44 that may store the instructions that may be executed by the processor 42, images or frames of images captured by the 3-D camera or RGB camera, user profiles or any other suitable information, images, or the like. In one example, the memory component 44 may include random access memory (RAM), read only memory (ROM), cache, Flash memory, a hard disk, or any other suitable storage component. The memory component 44 may also be referred to as a computer storage medium. As shown in FIG. 2, the memory component 44 may be a separate component in communication with the image capture component 32 and the processor 42. In another embodiment, the memory component 44 may be integrated into the processor 42 and/or the image capture component 32. In one embodiment, some or all of the components 32, 34, 36, 38, 40, 42 and 44 of the capture device 20 illustrated in FIG. 2 are housed in a single housing.
  • As shown in FIG. 2, the capture device 20 may be in communication with the computing environment 12 via a communication link 46. The communication link 46 may be a wired connection including, for example, a USB connection, a Firewire connection, an Ethernet cable connection, or the like and/or a wireless connection such as a wireless 802.11b, g, a, or n connection. The computing environment 12 may provide a clock to the capture device 20 that may be used to determine when to capture, for example, a scene via the communication link 46.
  • In one embodiment, the capture device 20 may provide the depth information and images captured by, for example, the 3-D camera 36 and/or the RGB camera 38 to the computing environment 12 via the communication link 46. The computing environment 12 may then use the depth information and captured images to, for example, create a virtual screen, adapt the user interface and control an application such as a game or word processor.
  • As shown in FIG. 2, computing environment 12 includes gestures library 192, structure data 198, gesture recognition engine 190, depth map processing and object reporting module 194, and operating system 196. Depth map processing and object reporting module 194 uses the depth maps to track the motion of objects, such as the user and other objects. To assist in the tracking of the objects, depth map processing and object reporting module 194 uses gestures library 192, structure data 198 and gesture recognition engine 190. In some embodiments, the depth map processing and object reporting module 194 uses a classifier 195 and a feature library 199 to identify objects. The feature library 199 may contain invariant features, such as rotation invariant features.
  • In one example, structure data 198 includes structural information about objects that may be tracked. For example, a skeletal model of a human may be stored to help understand movements of the user and recognize body parts. In another example, structural information about inanimate objects, such as props, may also be stored to help recognize those objects and help understand movement.
  • In one example, gestures library 192 may include a collection of gesture filters, each comprising information concerning a gesture that may be performed by the skeletal model. A gesture recognition engine 190 may compare the data captured by capture device 20 in the form of the skeletal model and movements associated with it to the gesture filters in the gesture library 192 to identify when a user (as represented by the skeletal model) has performed one or more gestures. Those gestures may be associated with various controls of an application. Thus, the computing environment 12 may use the gesture recognition engine 190 to interpret movements of the skeletal model and to control operating system 196 or an application based on the movements.
  • In one embodiment, depth map processing and object reporting module 194 will report to operating system 196 an identification of each object detected and the position and/or orientation of the object for each frame. Operating system 196 will use that information to update the position or movement of an object (e.g., an avatar) or other images in the display or to perform an action on the provided user-interface.
  • FIG. 3A is a flowchart of one embodiment of a process 350 of training a machine learning classifier using invariant features. The features may be invariant to any combination of rotation, translation, and scaling. Rotation invariant features include in-plane and/or out-of-plane invariance. Process 350 may involve use of a capture device 20. The process 350 may create the machine learning classifier that is later used at run time to identify objects.
  • In step 352, one or more example depth maps (or depth images) are accessed. These images may have been captured by a capture device 20. These depth maps may be labeled such that each depth pixel has been classified, for instance manually, or procedurally using computer generated imagery (CGI). For example, each depth pixel may be manually or procedurally classified as being part of a finger, hand, torso, specific segment of a body, etc. The labeling of the depth pixels may involve a person studying the depth map and assigning a label to each pixel, or assigning a label to a group of pixels. The labels might instead be continuous in a regression problem. For example, one might label each pixel with a distance to nearby body joints. Note that because the process 350 may use rotation invariant features to train the classifier, the number of example depth maps may be kept fairly low. For example, it may not be necessary to provide example images which show a hand (or other object) in a wide variety of rotations.
  • In step 354, canonical features are computed using an invariant feature transform. Briefly, each labeled example image may be processed in order to extract rotation-invariant features. In one embodiment, a local coordinate system is defined for any given pixel using a combination of in-plane and out-of-plane orientation estimates, and depth. This local coordinate system may be used to transform a feature window prior to computing the features to achieve rotation invariance. The result of step 354 may be a set of canonical features. Step 354 will be discussed in more detail with respect to FIG. 3B. In step 356, class labels (or continuous regression labels) are assigned to corresponding features based on the pixel labels in the example images.
  • In step 358, the canonical features and corresponding labels are passed to a machine learning classification system to train a classifier 195. Note that this is performed after the transformation of step 354. Therefore, the features may be rotation invariant. If step 354 determined both in-plane and out-of-plane orientations, then the features may be both in-plane and out-of-plane invariant. If step 354 determined only in-plane orientations, then the features may be in-plane rotation invariant. If step 354 determined only out-of-plane orientations, then the features may be out-of-plane rotation invariant. The classifier 195 may be used at run-time to classify rotationally-normalized features extracted from new input images. The features may also be invariant to translation and/or scaling. In some embodiments, features that are determined to be useful at identifying objects are saved, such that they may be stored in a feature library 199 for use at run time.
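  • As a non-authoritative sketch of steps 354-358, assuming the canonical (rotation-normalized) features have already been extracted into an array and paired with per-pixel labels, a generic decision-tree learner could be trained as shown below; scikit-learn and the file names are stand-ins, not part of the patent.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# canonical_features: (num_pixels, num_features) array from the invariant
# feature transform of step 354; pixel_labels: per-pixel class labels from
# step 356 (e.g., finger, hand, torso). Paths are placeholders.
canonical_features = np.load("canonical_features.npy")
pixel_labels = np.load("pixel_labels.npy")

classifier = DecisionTreeClassifier(max_depth=20)
classifier.fit(canonical_features, pixel_labels)  # step 358
```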
  • FIG. 3B is a flowchart that describes a process 300 of using invariant features to identify objects using computer vision. The features may be rotation invariant. Rotation invariant features include in-plane rotation invariant, out-of-plane rotation invariant, or both. The features may also be invariant to translation and/or scaling. Process 300 may be performed when a user is interacting with a motion capture system 10. Thus, process 300 could be used in a system such as depicted in FIG. 1 or 2. Process 300 may be used in a wide variety of other computer vision scenarios.
  • In step 302, a depth map is accessed. The capture device 20 may be used to capture the depth map. The depth map may include depth pixels. The depth map may be associated with an image coordinate system. For example, each depth pixel may have two coordinates (u, v) and a depth value. The depth map may be considered to be in a plane that is defined by the two coordinates (u, v). This plane may be based on the orientation of the depth camera and may be referred to herein as an imaging plane. If an object in the camera's field of view moves, it may be described as moving in-plane, out-of-plane or both. For example, rotating movement in the u, v plane (with points on the object retaining their depth values) may be referred to as in-plane rotation (axis of rotation is orthogonal to the u, v plane). Rotating movement that causes changes in depth values at different rates for different points on the object may be referred to as out-of-plane rotation. For example, rotation of a hand with the palm facing the camera is one example of in-plane rotation. Rotation of a hand with the thumb pointing towards and then away from the camera is one example of out-of-plane rotation.
  • In step 304, the depth map is filtered. In one embodiment, the depth map may be undistorted to remove the distortion effects from the lens. In other embodiments, upon receiving the depth map, the depth map may be down-sampled to a lower processing resolution such that the depth map may be more easily used and/or more quickly processed with less computing overhead. Additionally, one or more high-variance and/or noisy depth values may be removed and/or smoothed from the depth map and portions of missing and/or removed depth information may be filled in and/or reconstructed.
  • In step 306, the acquired depth map may be processed to distinguish foreground pixels from background pixels. Foreground pixels may be associated with some object (or objects) of interest to be analyzed. As used herein, the term “background” is used to describe anything in an image that is not part of the one or more objects of interest. For ease of discussion, a single object will be referred to when discussing process 300. Process 300 analyzes pixels in that object of interest. These pixels will be referred to as a subset of the pixels in the depth map.
  • Steps 308-316 describe processing individual pixels associated with the object of interest. In general, these steps involve performing an invariant feature transform. For example, this may be a rotation invariant transform. The transform may also be invariant to translation and/or scale. Note that steps 308-316 are one embodiment of step 354 from FIG. 3A.
  • In step 308, a determination is made whether there are more pixels in the subset to process. If so, processing continues with step 310 with one of the depth pixels. In step 310, a local orientation of the depth pixel is estimated. In one embodiment, the local orientation is an in-plane orientation. In one embodiment, the local orientation is an out-of-plane orientation. In one embodiment, the local orientation is both an in-plane orientation and an out-of-plane orientation. Further details of estimating a local orientation are discussed below.
  • In step 312, a local coordinate system is defined for the depth pixel. In some embodiments, the local coordinate system is a 3D coordinate system. The local coordinate system is based on the local orientation of the depth pixel. For example, if the user's hand moves, rotates, etc., then the local coordinate system moves with the hand. Further details of defining a local coordinate system are discussed below.
  • In step 314, a feature region is defined relative to the local coordinate system for the presently selected depth pixel. For example, a feature window is defined with its center at the depth pixel. One or more feature test points, feature test rectangles, Haar wavelets, or other such features may be defined based on the geometry of the feature window.
  • In step 316, the feature region is transformed from the local coordinate system to the image coordinate system. Further details of performing the transform are discussed below. Note that this may involve a transformation from the 3D space of the local coordinate system to a 2D space of the depth map.
  • Processing then returns to step 308 to determine if there are more depth pixels to analyze. If not, then processing continues at step 318. In step 318, the transformed feature regions are used to attempt to identify one or more objects in the depth map. For example, an attempt is made to identify a user's hand. This attempt may include classifying each pixel. For example, each pixel may be assigned a probability that it is part of a hand, head, arm, certain segment of an arm, etc.
  • In one embodiment, a decision tree is used to classify pixels. Such analysis can determine a best-guess of a target assignment for that pixel and the confidence that the best-guess is correct. In some embodiments, the best-guess may include a probability distribution over two or more possible targets, and the confidence may be represented by the relative probabilities of the different possible targets. In other embodiments the best-guess may include a spatial distribution over 3D offsets to body or hand joint positions. At each node of a decision tree, an observed depth value comparison between two pixels is made, and, depending on the result of the comparison, a subsequent depth value comparison between two pixels is made at the child node of the decision tree. The result of such comparisons at each node determines the pixels that are to be compared at the next node. The terminal nodes of each decision tree result in a target classification or regression with an associated confidence.
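  • As a rough, non-limiting sketch, a depth-comparison decision tree of the kind just described might be traversed per pixel as follows; the Node structure, probe offsets, and thresholds are assumptions made for the example.

```python
# Sketch of per-pixel classification with a depth-comparison decision tree:
# each internal node compares the depth at two probe offsets around the
# pixel and routes left or right; leaves hold a class distribution.
import numpy as np

class Node:
    def __init__(self, offset1=None, offset2=None, threshold=0.0,
                 left=None, right=None, leaf_probs=None):
        self.offset1, self.offset2 = offset1, offset2   # (du, dv) probe offsets
        self.threshold = threshold
        self.left, self.right = left, right
        self.leaf_probs = leaf_probs                    # set only on leaf nodes

def classify_pixel(depth, u, v, node):
    h, w = depth.shape
    while node.leaf_probs is None:
        (du1, dv1), (du2, dv2) = node.offset1, node.offset2
        d1 = depth[np.clip(v + dv1, 0, h - 1), np.clip(u + du1, 0, w - 1)]
        d2 = depth[np.clip(v + dv2, 0, h - 1), np.clip(u + du2, 0, w - 1)]
        node = node.left if (d1 - d2) < node.threshold else node.right
    return node.leaf_probs  # e.g., {"hand": 0.7, "arm": 0.2, "head": 0.1}
```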
  • In some embodiments, subsequent decision trees may be used to iteratively refine the best-guess of the one or more target assignments for each pixel and the confidence that the best-guess is correct. For example, once the pixels have been classified with the first classifier tree (based on neighboring depth values), a refining classification may be performed to classify each pixel by using a second decision tree that looks at the previous classified or regressed pixels and/or depth values. A third pass may also be used to further refine the classification or regression of the current pixel by looking at the previous classified or regressed pixels and/or depth values. It is to be understood that virtually any number of iterations may be performed, with fewer iterations resulting in less computational expense and more iterations potentially offering more accurate classifications or regressions, and/or confidences.
  • In some embodiments, the decision trees may have been constructed during a training mode in which the example images were analyzed to determine the questions (i.e., tests) that can be asked at each node of the decision trees in order to produce accurate pixel classifications. In one embodiment, foreground pixel assignment is stateless, meaning that the pixel assignments are made without reference to prior states (or prior image frames). One example of a stateless process for assigning probabilities that a particular pixel or group of pixels represents one or more objects is the Exemplar process. The Exemplar process uses a machine-learning approach that takes a depth map and classifies each pixel by assigning to each pixel a probability distribution over the one or more objects to which it could correspond. For example, a given pixel, which is in fact a tennis racquet, may be assigned a 70% chance that it belongs to a tennis racquet, a 20% chance that it belongs to a ping pong paddle, and a 10% chance that it belongs to a right arm. Further details of using decision trees are discussed in US Patent Application Publication 2010/0278384, titled “Human Body Pose Estimation,” by Shotton et al., published on Nov. 4, 2010, which is hereby incorporated by reference. Note that it is not required that decision trees be used. Another technique that may be used to classify pixels is a Support Vector Machine (SVM). Step 318 may include using a classifier that was developed during a training session such as that of FIG. 3A.
  • As discussed above, part of step 354 of FIG. 3A (and of step 310 of FIG. 3B) is to estimate a local orientation of depth pixels. FIGS. 4A-4F will be referred to in order to discuss estimating a local orientation of depth pixels with respect to the (u, v) coordinate system of the depth map. In these examples, the depth values are not factored in to the local orientation. Therefore, this may be considered to be an in-plane orientation.
  • FIG. 4A depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated, in accordance with one embodiment. Each depth pixel is assigned a value between 0-360 degrees, in this embodiment. The assignment is made such that if the object is rotated in-plane (e.g., in the (u, v) image plane) the depth pixel will have the same local orientation, or at least very close to the same value. For example, the depth pixel may have the same angle assigned to it regardless of rotation in the (u, v) image plane.
  • Note that the angle is with respect to any convenient reference axis. As one example, the depth map has a u-axis and a v-axis. The angle may be with respect to either axis, or some other axis. Two example depth pixels p1, p2 are shown. Two points q1, q2 are also depicted. The point q is the nearest point on the edge of the hand to the given depth pixel. A line is depicted from p to q. The angle θ is the angle of that line to the u-axis (or, more precisely, to a line that runs parallel to the u-axis). Note that if the hand were rotated in the (u, v) plane, the angle θ would change by the same amount for all pixels. Therefore, the angle θ serves as a way of describing a local orientation of a depth pixel that is in-plane rotation invariant.
  • FIG. 4B depicts a depth map of an object for which in-plane local orientation of depth pixels has been estimated, in accordance with one embodiment. This embodiment uses a different technique for determining the angle than the embodiment of FIG. 4A. In this embodiment, the angle is based on a tangent to the object at a point q. The two example depth pixels p1, p2 and the two points q1, q2 are depicted. The angle θ1b for pixel p1 is the angle of the tangent to the hand at q1. Similar reasoning applies for angle θ2b. Note that if the hand were rotated in the (u, v) plane, the angle θ would change by the same amount for all pixels. Therefore, the angle θ serves as a way of describing a local orientation of a depth pixel that is in-plane rotation invariant.
  • In FIGS. 4A and 4B, for the purpose of illustration, the depth pixels are grouped into those with angles between 60-180, those between 180-300, and those between 300-60. In actual practice, no such grouping is required. Also, note that it is not required that the angle assignment be between 0-360 degrees. For example, it could be between −180 to +180 degrees, or another scheme. It may also be between 0-180, in which case the feature transform is rotationally invariant only up to a two-way ambiguity.
  • FIG. 4C is a flowchart of one embodiment of a process 450 of assigning an angle to a depth pixel. The process 450 may be performed once for each depth pixel in an object of interest. In process 450, the angle is determined relative to the nearest edge of the object. Therefore, process 450 may be used for either embodiment of FIG. 4A or 4B. Note that process 450 is one embodiment of estimating local orientation of step 310. In particular, process 450 is one embodiment of estimating in-plane local orientation.
  • In step 452, edges of the object are detected. The edge is one example of a reference line of the object of interest. A variety of edge detection techniques may be used. Since edge detection is well-known by those of ordinary skill in the art, it will not be discussed in detail. Note that edge detection could be performed in a step prior to step 310.
  • In step 456, the closest edge to the present depth pixel is determined. For example, q1 in FIG. 4A or 4B is determined as the closest point on the edge of the hand to depth pixel p1. Likewise, q2 is determined as the closest point on the edge of the hand to depth pixel p2, when processing that depth pixel. This can be efficiently computed using, for example, distance transforms.
  • In step 458, a rotation invariant angle to assign to the depth pixel is determined. In one embodiment, the angle may be defined based on the tangent to the edge of the hand at the edge point (e.g., q1, q2). This angle is one example of a rotation invariant angle for the closest edge point. Since the closest edge point (e.g., q1) is associated to the depth pixel (p1), the angle may also be considered to be one example of a rotation invariant angle for the depth pixel. As noted, any convenient reference axis may be used, such as the u-axis of the depth map. This angle is assigned to the present depth pixel. Referring to FIG. 4B, θ1b is assigned to p1 and θ2b is assigned to p2.
  • In one embodiment, the angle may be defined based on the technique shown in FIG. 4A. As noted, any convenient reference axis may be used, such as the u-axis of the depth map. In this case, the angle is defined as the angle between the u-axis and the line between p and q. This angle is assigned to the present depth pixel. Referring to FIG. 4A, θ1a is assigned to p1 and θ2a is assigned to p2. Process 450 continues until angles have been assigned to all depth pixels of interest.
  • After all depth pixels have been assigned an angle, smoothing of the results may be performed in step 460. For example, the angle of each depth pixel may be compared to its neighbors, with outliers being smoothed.
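  • The following sketch illustrates one possible implementation of process 450 (the FIG. 4A variant) using a distance transform to find the closest edge point; the use of SciPy's distance_transform_edt and a single-erosion edge mask are assumptions made for the example.

```python
# Sketch of process 450: for each foreground pixel p, find the closest edge
# point q via a distance transform and assign the angle of the line p -> q
# relative to the u-axis.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def in_plane_orientation(foreground):
    """foreground: boolean (H, W) mask of the object of interest."""
    # Step 452: edges = foreground pixels removed by a single erosion.
    edges = foreground & ~binary_erosion(foreground)
    # Step 456: for every pixel, indices (qv, qu) of its nearest edge pixel
    # (distance transform to the nearest zero of ~edges).
    _, (qv, qu) = distance_transform_edt(~edges, return_indices=True)
    v, u = np.indices(foreground.shape)
    # Step 458: angle of the line p -> q relative to the u-axis (FIG. 4A variant).
    theta = np.arctan2(qv - v, qu - u)
    # Step 460 (smoothing) is omitted here; note that naive filtering of
    # angles must handle the wrap-around at +/- pi.
    return np.where(foreground, theta, 0.0)
```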
  • Another technique for estimating a local in-plane orientation of depth pixels is based on medial axes. FIG. 4D depicts a hand to discuss such an embodiment. A medial axis may be defined as a line that is roughly a mid-point between two edges. To some extent, a medial axis may serve to represent a skeletal model. FIG. 4D shows a depth pixel p3, with its closest medial axis point q3. The angle θ3a represents a local orientation of depth pixel p3. Note that if the hand were rotated in the (u, v) plane, the angle θ3a would change by the same amount for all pixels. Therefore, the angle θ3a serves as a way of describing a local orientation of a depth pixel that is in-plane rotation invariant. In this embodiment, the angle is defined between a line parallel to the u-axis and the line between p and q.
  • FIG. 4E depicts a hand to discuss another embodiment for determining a rotation invariant angle. FIG. 4E shows a depth pixel p3, with its closest medial axis point q3. The angle θ3b represents a local orientation of depth pixel p3. In this example, θ3b is defined based on the tangent at point q3 to the medial axis. Note that if the hand were rotated in the (u, v) plane, the angle θ3b would change by the same amount for all pixels. Therefore, the angle θ3b serves as a way of describing a local orientation of a depth pixel that is in-plane rotation invariant.
  • FIG. 4F is a flowchart of one embodiment of a process 480 of assigning angles to depth pixels. In this process 480, the angle is determined relative to the nearest medial axis of the object. Therefore, FIGS. 4D and 4E will be referred to when discussing process 480. Process 480 is one embodiment of steps 308 and 310. In particular, process 480 is one embodiment of estimating an in-plane orientation of depth pixels.
  • In step 482, medial axes of the object are determined. A medial axis may be defined based on the contour of the object. It can be implemented by iteratively eroding the boundaries of the object without allowing the object to break apart. The remaining pixels make up the medial axes. Since medial axis computation is well-known by those of ordinary skill in the art, it will not be discussed in detail. Example medial axes are depicted in FIGS. 4D and 4E. A medial axis is one example of a reference line in the object.
  • Next, depth pixels in the object are processed one by one. In step 486, the closest point on a medial axis to the present depth pixel is determined. Referring to either FIG. 4D or 4E, point q3 may be determined to be the closest point to p3.
  • In step 488, a rotation invariant angle for the depth pixel is determined. The angle may be based on the tangent to the medial axis at point q3, as depicted in FIG. 4E. The angle may also be determined based on the technique shown in FIG. 4D. As noted, any convenient reference axis may be used, such as the u-axis of the depth map. The angle is one example of a rotation invariant angle for the point q3. Since the closest medial axis point (e.g., q3) is associated to the depth pixel (p3), the angle may also be considered to be one example of a rotation invariant angle for the depth pixel. Referring to either FIG. 4D or 4E, angle θ3a or θ3b is determined. This angle is assigned to the present depth pixel. The process 480 continues until angles have been assigned to all depth pixels of interest.
  • After all depth pixels have been assigned an angle, smoothing of the results is performed in step 490. For example, the angle of each depth pixel may be compared to its neighbors, with outliers being smoothed.
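  • A comparable sketch for process 480 follows, assuming a skeletonization routine (here skimage's skeletonize) stands in for the iterative-erosion medial axis described above.

```python
# Sketch of process 480: skeletonize the foreground mask to obtain a medial
# axis, then assign each pixel the angle toward its nearest skeleton point
# (the FIG. 4D variant).
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage.morphology import skeletonize

def medial_axis_orientation(foreground):
    skeleton = skeletonize(foreground)                                    # step 482
    _, (qv, qu) = distance_transform_edt(~skeleton, return_indices=True)  # step 486
    v, u = np.indices(foreground.shape)
    theta = np.arctan2(qv - v, qu - u)                                    # step 488
    return np.where(foreground, theta, 0.0)
```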
  • As noted, the estimate of the local pixel orientation may be an estimate of the out-of-plane orientation. FIG. 5 is a flowchart of one embodiment of a process 500 of estimating the local out-of-plane orientation of depth pixels. In this embodiment, the out-of-plane orientation is based on the surface normal of the object of interest at the depth pixel. Process 500 is one embodiment of steps 308-310 and will be discussed with reference to FIG. 6A.
  • In step 502, a point cloud model is developed. The point cloud model may be a 3D model in which each depth pixel in the depth map is assigned a coordinate in 3D space, for example. The point cloud may have one point for each depth pixel in the depth map, but that is not an absolute requirement. To facilitate discussion, it will be assumed that each point in the point cloud has a corresponding depth pixel in the depth map. However, note that this one-to-one correspondence is not a requirement. Herein, the term “depth point” will be used to refer to a point in the point cloud.
  • FIG. 6A depicts a point cloud model 605 of a hand and portion of an arm. The point cloud model is depicted within an (a, b, c) global coordinate system. Thus, an a-axis, b-axis, and c-axis of a global coordinate system are depicted. In some embodiments, two of the axes in the global coordinate system correspond to the u-axis and v-axis of the depth map. However, this correspondence is not a requirement. The position along the third axis in the global coordinate system may be determined based on the depth value for a depth pixel in the depth map. Note that the point cloud model 605 may be generated in another manner. Also note that using a point cloud model 605 is just one way to determine a surface normal. Other techniques could be used.
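  • For illustration, a point cloud of the kind described in step 502 could be produced by back-projecting each depth pixel through a simple pinhole model; the intrinsic parameters below are placeholders.

```python
# Sketch of step 502: back-project each depth pixel (u, v, d) into a 3D
# point using a pinhole model with focal lengths (f1, f2) and principal
# point (c1, c2).
import numpy as np

def depth_to_point_cloud(depth, f1, f2, c1, c2):
    v, u = np.indices(depth.shape)
    x = (u - c1) * depth / f1
    y = (v - c2) * depth / f2
    z = depth
    return np.stack([x, y, z], axis=-1)   # (H, W, 3): one 3D point per depth pixel
```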
  • In step 504 of FIG. 5, a determination is made whether there are more depth pixels to process. Note that the processing here is of depth points in the point cloud 605.
  • In step 506, a surface normal is determined at the present point. By surface normal it is meant a line that is perpendicular to the surface of the object of interest. The surface normal may be determined by analyzing nearby depth points. The surface normal may be defined in terms of the (a, b, c) global coordinate system. In FIG. 6A, the surface normal is depicted as the z-axis that touches the second finger of the hand. The x-axis, y-axis, and z-axis form a local coordinate system for the pixel presently being analyzed. The local coordinate system will be further discussed below. Processing continues until a surface normal is determined for all depth points.
  • In step 508, smoothing of the surface normals is performed. Note that using surface normals is one example of how to determine a local orientation for depth pixels that may be used for out-of-plane rotation. However, other parameters could be determined. Also, as noted above, there may be one depth point in the point cloud 605 for each depth pixel in the depth map. Therefore, the assignment of surface normals to depth pixels may be straightforward. However, if such a one-to-one correspondence does not exist, a suitable calculation can be made to assign surface normals to depth pixels in the depth map. Finally, it will be understood that although the discussion of FIG. 5 was of determining a surface normal to a depth point, this is one technique for determining a local orientation of a depth pixel.
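  • One possible (non-limiting) way to estimate the surface normals of steps 506-508 from the point cloud is sketched below; the use of central-difference tangents is an illustrative choice, and fitting a plane to a local neighborhood would work as well.

```python
# Sketch of steps 506-508: estimate a per-pixel surface normal as the cross
# product of the local tangent vectors along the u and v directions of the
# point cloud, then unitize. Normals may need their sign flipped so they
# point toward the camera.
import numpy as np

def surface_normals(points):                      # points: (H, W, 3) point cloud
    du = np.gradient(points, axis=1)              # tangent along the u direction
    dv = np.gradient(points, axis=0)              # tangent along the v direction
    n = np.cross(du, dv)                          # perpendicular to the local surface
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-9)             # unitized normal per depth pixel
```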
  • As noted in step 354, after determining the local orientation of depth pixels, a local coordinate system is determined for each of the depth pixels. FIGS. 6A and 6B show an object with a local coordinate system (labeled as x-axis, y-axis, z-axis) for one of the depth points. The local coordinate system has three perpendicular axes, in this embodiment. The origin of the local coordinate system is at one of the 3D depth points in the object of interest. One axis (e.g., z-axis) is normal to the surface of the object of interest. That is, it is the surface normal at a certain depth point. Determining the x-axis and the y-axis will be discussed below. Also note that, although for purposes of illustration the local coordinate system is depicted relative to one of the depth points in the point cloud 605, the local coordinate system is considered to be a local coordinate system for one of the depth pixels in the depth map.
  • A feature region or window 604 is also depicted in FIGS. 6A-6B. The dashed lines are depicted to demonstrate the position of the feature window 604 relative to the local coordinate system. The feature window 604 may be used to help define features (also referred to as “feature probes”). For example, a feature probe can be defined based on the origin of the local coordinate system and some point in the feature window. Note that the feature window may be transformed to the depth map prior to using the feature probe.
  • In an embodiment in which the object is a hand, the local coordinate system moves consistently with the hand. For example, if the hand rotates, the local coordinate system rotates by a corresponding amount. Of course, the object could be any object. Thus, more generally, the local coordinate system moves consistently with the object. In some embodiments, features are defined based on the local coordinate system. Therefore, the features may be invariant to factors such as rotation, translation, scale, etc.
  • Referring now to FIG. 6B, the hand has been rotated relative to the hand of FIG. 6A. However, note that the x-axis and the y-axis are in the same position relative to the hand. The z-axis is not depicted in FIG. 6B, but it will be understood that it is still normal to the surface at the location of the depth point. The feature window 604 is also in the same position relative to the local coordinate system. Therefore, the feature window 604 is also in the same position relative to the hand. Note that this means that if a feature is defined in the local coordinate system, the feature will automatically rotate with the hand (or other object).
  • As discussed above, in some embodiments, there is a 2D coordinate system for the depth map (with each depth pixel having a depth value) and a 3D local coordinate system for each depth pixel of interest. FIG. 7 depicts an image window 702 associated with a 2D depth image coordinate system and a corresponding window 604 in a 3D local coordinate system. The image window 702 represents a portion of the depth map. The point p(u, v, d) represents the test pixel from the depth map, where (u, v) are image coordinates and d is depth. The point q(u+λ cos(θ), v+λ sin(θ), d) represents the point of interest, also in the depth map. The point of interest could be any pixel in the depth image.
  • The arrows in the image window 702 that originate from pixel p are parallel to the u-axis and the v-axis. A line is depicted between the pixel p and the point of interest q. The angle θ is the estimated in-plane rotation, which in this example is defined as the angle between the line and a reference axis. In this example, the reference axis is the u-axis, but any reference axis could be chosen.
  • Referring back to FIG. 4A or 4B, the point p(u, v, d) might represent one of the depth pixels, such as p1. The point q might represent the nearest point on the edge of the hand, such as q1. The angle θ might represent the angle θ1a between the line from p1 to q1 and the reference axis, as shown in FIG. 4A. The angle θ might represent the angle θ1b between the tangent to the edge of the hand at point q1 and some reference axis, as shown in FIG. 4B.
  • Referring back to FIG. 4D or 4E, the point p(u, v, d) might represent the depth pixel p3. The point q might represent the nearest point on the medial axis, q3. The angle θ might represent the angle θ3a between the line from p3 to q3 and the reference axis, as depicted in FIG. 4D. The angle θ might represent the angle θ3b between the tangent to the medial axis at point q3 and some reference axis, as depicted in FIG. 4E. The window 604 in the local 3D coordinate system contains the point P, which corresponds to pixel p in the 2D depth map. For the sake of illustration, point P could be the point in the point cloud of FIGS. 6A and 6B from which the surface normal (z-axis) originates. Window 604 represents a feature window 604 in the local 3D coordinate system. Examples of a local coordinate system and feature window 604 were discussed with respect to FIGS. 6A and 6B.
  • A vector $\vec{n}$, which corresponds to the surface normal, is depicted with its tail at point P. A vector $\vec{V}$ has its tail at point P and its head at point Q. Point Q is the point in 3D space that corresponds to point q in the 2D depth map. Vectors $\vec{r_1}$ and $\vec{r_2}$ may correspond to the x-axis and the y-axis in the local coordinate system (see, for example, FIGS. 6A-6B). Techniques for transforming between the local 3D coordinate system and the 2D image coordinate system will now be discussed. These techniques may be used for step 316.
  • The following describes a transformation from a 3D point $X_w$ (where the first two coordinates are usually defined between [−1,1] and the third coordinate is typically zero) in a canonical window into depth pixel coordinates x. Equation 1 states a general form for the transformation equation.

  • $x = \mathrm{deHom}\bigl(\Phi(R\,S\,X_w + \vec{t})\bigr)$  Eq. 1
  • The transformation equation applies a rotation matrix R, a diagonal scaling matrix S, and a camera projection function Φ. The vector $\vec{t}$ is a translation. The camera matrix projects from 3D into 2D.
  • In Equation 1, deHom(·) is the de-homogenization operation given by
  • $\mathrm{deHom}\!\left(\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}\right) = \begin{bmatrix} X/Z \\ Y/Z \\ 1 \end{bmatrix} = \begin{bmatrix} X/Z \\ Y/Z \end{bmatrix}$  Eq. 2
  • In order to derive the rotation matrix R and the vector $\vec{t}$, the following is considered. The present pixel in the depth map being examined may be defined as p(u, v, d), where (u, v) are the depth map pixel coordinates and “d” is a depth value for the depth pixel.
  • Next, some point of interest “q” relative to the present depth pixel is considered. The point of interest may be any point. One example is the closest edge point, as discussed in FIGS. 4A-4C. Another example is the closest medial axis point, as discussed in FIGS. 4D-4F. However, it will be understood that some other point of interest may be determined. Note that these points of interest may be selected such that a local orientation of the depth pixel that is in-plane rotation invariant may be determined. An estimated in-plane rotation θ is also determined, as in the examples above.
  • Furthermore, an estimated out-of-plane local orientation is determined. For example, the surface normal is estimated as discussed with respect to FIG. 5.
  • Additionally, window scaling factors (sx, sy, sz) are pre-specified, with S = diag([sx, sy, sz]). This window may be used for the feature window 604. Note that if the window scaling is defined in 3D, then the window may be given actual measurements, such that after it is projected to 2D it will scale properly. For example, the window could be defined as being 100 mm on each of three sides. When projecting back to the 2D space, the feature window 604 scales properly. Referring back to FIGS. 6A-6B, the feature window 604 was depicted in two dimensions (x, y) for clarity. However, the feature window 604 can also be defined as a three-dimensional object, using the z-axis.
  • Referring again to transformation equation (Eq. 1), Φ(.) refers to a generic camera projection function that transforms a 3D point in the camera coordinate system into a pixel homogeneous coordinate. The inverse transformation is given by Φ−1(.). The camera projection function may be used to factor in various physical properties such as focal lengths (f1, f2), principal point (c1, c2), skew coefficient (α), lens distortion parameters etc. An example of a camera projection function that does not account for lens distortion is given by Φ(X)=KX, where K is a camera matrix as shown in Equation 3. A more general camera projection function that does account for radial distortion can be used instead. Camera projection functions are well known and, therefore, will not be discussed in detail.
  • $K = \text{camera matrix} = \begin{bmatrix} f_1 & \alpha f_1 & c_1 \\ 0 & f_2 & c_2 \\ 0 & 0 & 1 \end{bmatrix}$  Eq. 3
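  • For reference, a minimal sketch of the distortion-free projection function Φ of Equation 3, its inverse, and the deHom operation of Equation 2 might look as follows (placeholder intrinsics, no radial distortion):

```python
# Sketch of the camera projection function Phi (Eq. 3, no lens distortion),
# its inverse, and deHom (Eq. 2).
import numpy as np

def camera_matrix(f1, f2, c1, c2, alpha=0.0):
    return np.array([[f1, alpha * f1, c1],
                     [0.0, f2, c2],
                     [0.0, 0.0, 1.0]])

def de_hom(x):
    return x[:2] / x[2]                          # Eq. 2

def project(K, X):                               # Phi: 3D camera point -> pixel (u, v)
    return de_hom(K @ X)

def unproject(K, uv):                            # Phi^{-1}: pixel -> point at depth 1
    return np.linalg.solve(K, np.array([uv[0], uv[1], 1.0]))
```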
  • The rotation matrix may be computed as in Equation 4.

  • $R_{3\times 3} = \begin{bmatrix} \vec{r_1} & \vec{r_2} & \vec{r_3} \end{bmatrix}$  Eq. 4
  • In Equation 4, the vector $\vec{r_3}$ may be a unitized version of the surface normal. Note that this may be the z-axis of the window 604. The vector $\vec{r_1}$ (x-axis) may be the component of $\vec{V}$ that is orthogonal to the surface normal. Recall that $\vec{V}$ was defined in FIG. 7 as Q−P. The vector $\vec{V}$ may be referred to herein as an in-plane rotation-variant vector. The vector $\vec{r_2}$ (y-axis) may be computed as the cross product of $\vec{r_3}$ and $\vec{r_1}$. The following Equations summarize the foregoing.

  • $\vec{r_3} = \mathrm{unitize}(\vec{n})$  Eq. 5

  • $\vec{r_1} = \mathrm{unitize}\bigl(\vec{V} - (\vec{V}^{T}\vec{r_3})\,\vec{r_3}\bigr)$  Eq. 6

  • $\vec{r_2} = \vec{r_3} \times \vec{r_1}$  Eq. 7
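  • A small sketch of Equations 4-7 follows, building the local rotation matrix from the surface normal $\vec{n}$ and the in-plane vector $\vec{V}$ (both assumed already estimated):

```python
# Sketch of Equations 4-7: build the local rotation matrix from the surface
# normal n (out-of-plane orientation) and the in-plane vector V.
import numpy as np

def unitize(v):
    return v / np.linalg.norm(v)

def local_rotation(n, V):
    r3 = unitize(n)                              # Eq. 5: z-axis = surface normal
    r1 = unitize(V - (V @ r3) * r3)              # Eq. 6: x-axis = V minus its normal component
    r2 = np.cross(r3, r1)                        # Eq. 7: y-axis completes the frame
    return np.column_stack([r1, r2, r3])         # Eq. 4: R = [r1 r2 r3]
```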
  • The translation vector $\vec{t}$ may be computed as in Equation 8.
  • $\vec{t} = P = \left(\Phi^{-1}\!\left(\begin{bmatrix} p \\ 1 \end{bmatrix}\right)\right) d$  Eq. 8
  • The vector $\vec{V}$ may be computed as in Equations 9A-9C.
  • $\vec{V} = \mathrm{unitize}\!\left(\left(\Phi^{-1}\!\left(\begin{bmatrix} q \\ 1 \end{bmatrix} - \begin{bmatrix} p \\ 1 \end{bmatrix}\right)\right) d\right)$  Eq. 9A
  • $\vec{V} = \mathrm{unitize}\!\left(\Phi^{-1}\!\left(\begin{bmatrix} q - p \\ 0 \end{bmatrix}\right)\right)$  Eq. 9B
  • $\vec{V} = \mathrm{unitize}\!\left(\Phi^{-1}\!\left(\begin{bmatrix} \cos(\theta) \\ \sin(\theta) \\ 0 \end{bmatrix}\right)\right)$  Eq. 9C
  • For a 3D feature transform, and in the absence of radial distortion, the full 3D transform may be computed as in Equations 10A and 10B.
  • $T_{3\times 4} = K\left(\begin{bmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & s_z \end{bmatrix}\begin{bmatrix} R_{3\times 3} & \vec{t} \end{bmatrix}\right)$  Eq. 10A
  • $x = \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathrm{deHom}\!\left(T_{3\times 4}\begin{bmatrix} x_w \\ y_w \\ z_w \\ 1 \end{bmatrix}\right)$  Eq. 10B
  • For a 2D feature in the canonical XY-plane, the direct transformation from canonical coordinates (xw, yw) in a [−1,1] window to depth pixel coordinates in the depth map may be determined by pre-computing the homography transformation H as Equation 11A and then calculating x, as in Equation 11B.
  • $H_{3\times 3} = K\left(\begin{bmatrix} s_1\vec{r_1} & s_2\vec{r_2} & \vec{t} \end{bmatrix}\right)$  Eq. 11A
  • $x = \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \mathrm{deHom}\!\left(H\begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix}\right)$  Eq. 11B
  • Performing the transform in the other direction may be as in Equation 12.
  • $\begin{bmatrix} x_w \\ y_w \\ 1 \end{bmatrix} = \mathrm{deHom}\!\left(H^{-1}\begin{bmatrix} u \\ v \\ 1 \end{bmatrix}\right)$  Eq. 12
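  • Equations 11A, 11B and 12 can be sketched as follows; de-homogenization is inlined, and s1, s2 denote the window scaling factors for the 2D case:

```python
# Sketch of Equations 11A-11B and 12: map canonical feature-window
# coordinates (xw, yw) in [-1, 1] to depth-map pixels and back.
import numpy as np

def window_homography(K, R, t, s1, s2):
    r1, r2 = R[:, 0], R[:, 1]
    return K @ np.column_stack([s1 * r1, s2 * r2, t])        # Eq. 11A

def canonical_to_pixel(H, xw, yw):
    x = H @ np.array([xw, yw, 1.0])                          # Eq. 11B
    return x[:2] / x[2]

def pixel_to_canonical(H, u, v):
    xw = np.linalg.inv(H) @ np.array([u, v, 1.0])            # Eq. 12
    return xw[:2] / xw[2]
```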
  • As noted above, the local orientation may be based on in-plane, out-of-plane, or both. FIG. 8 is a flowchart of one embodiment of a process 800 of establishing a local orientation for a depth pixel factoring in the different possibilities. Process 800 may be repeated for each depth pixel for which a local orientation is to be determined.
  • In step 802, a determination is made whether an estimate of a local in-plane orientation is to be made. If so, then the in-plane estimate is made in step 804. Techniques for determining a local in-plane orientation have been discussed with respect to FIGS. 4A-4F. As noted, the estimate of the in-plane orientation may be an angle with respect to some reference axis in the depth map (or 2D image coordinate system). If the in-plane estimate is not to be made, then the angle θ may be set to a default value in step 806. As one example, the angle θ may be set to 0 degrees. Therefore, all depth pixels will have the same angles.
  • Note that regardless of whether or not the local in-plane estimate is made, the processing to determine the local coordinate system may be the same. For example, referring to Equations above that use the angle θ, the calculations may be performed in a similar manner by using the default value for θ.
  • In step 808, a determination is made whether an estimate of a local out-of-plane orientation is to be made. If so, then the out-of-plane estimate is made in step 810. Note that if the in-plane orientation was not determined, then the out-of-plane orientation is determined in step 810. Techniques for determining a local out-of-plane orientation have been discussed with respect to FIG. 5. As noted, the estimate of the out-of-plane orientation may be a surface normal of the object at a given depth pixel or point in a point cloud model. Thus, the output of the estimate may be a vector.
  • If the out-of-plane estimate is not to be made, then the vector may be set to a default value in step 812. As one example, the vector may be set to being parallel to the optical axis of the camera. Therefore, all depth pixels will have the same vectors.
  • Note that regardless of whether or not the local out-of-plane estimate is made, the processing to determine the local coordinate system may be the same. For example, referring to the Equations above that use the vector $\vec{n}$, the calculations may be performed in a similar manner by using the default value for the vector $\vec{n}$.
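  • A compact sketch of the default handling in process 800 follows; the choice of 0 degrees and the camera's +z direction as defaults is modeled on steps 806 and 812.

```python
# Sketch of process 800: substitute default values when one of the two
# orientation estimates is switched off, so the same math builds the local
# coordinate system either way.
import numpy as np

def local_orientation(theta_estimate=None, normal_estimate=None):
    """Return (theta, n) for one depth pixel, using defaults when an
    estimate is not made (steps 806 and 812)."""
    theta = theta_estimate if theta_estimate is not None else 0.0
    n = normal_estimate if normal_estimate is not None else np.array([0.0, 0.0, 1.0])
    return theta, n
```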
  • FIG. 9 illustrates an example of a computing environment including a multimedia console (or gaming console) 100 that may be used to implement the computing environment 12 of FIG. 2. The capture device 20 may be coupled to the computing environment. As shown in FIG. 9, the multimedia console 100 has a central processing unit (CPU) 101 having a level 1 cache 102, a level 2 cache 104, and a flash ROM (Read Only Memory) 106. The level 1 cache 102 and a level 2 cache 104 temporarily store data and hence reduce the number of memory access cycles, thereby improving processing speed and throughput. The CPU 101 may be provided having more than one core, and thus, additional level 1 and level 2 caches 102 and 104. The flash ROM 106 may store executable code that is loaded during an initial phase of a boot process when the multimedia console 100 is powered ON.
  • A graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high speed and high resolution graphics processing. Data is carried from the graphics processing unit 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display. A memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as, but not limited to, a RAM (Random Access Memory).
  • The multimedia console 100 includes an I/O controller 120, a system management controller 122, an audio processing unit 123, a network interface controller 124, a first USB host controller 126, a second USB controller 128 and a front panel I/O subassembly 130 that are preferably implemented on a module 118. The USB controllers 126 and 128 serve as hosts for peripheral controllers 142(1)-142(2), a wireless adapter 148, and an external memory device 146 (e.g., flash memory, external CD/DVD ROM drive, removable media, etc.). The network interface 124 and/or wireless adapter 148 provide access to a network (e.g., the Internet, home network, etc.) and may be any of a wide variety of various wired or wireless adapter components including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.
  • System memory 143 is provided to store application data that is loaded during the boot process. A media drive 144 is provided and may comprise a DVD/CD drive, hard drive, or other removable media drive, etc. The media drive 144 may be internal or external to the multimedia console 100. Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100. The media drive 144 is connected to the I/O controller 120 via a bus, such as a Serial ATA bus or other high speed connection (e.g., IEEE 1394).
  • The system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100. The audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.
  • The front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100. A system power supply module 136 provides power to the components of the multimedia console 100. A fan 138 cools the circuitry within the multimedia console 100.
  • The CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnects (PCI) bus, PCI-Express bus, etc.
  • When the multimedia console 100 is powered ON, application data may be loaded from the system memory 143 into memory 112 and/or caches 102, 104 and executed on the CPU 101. The application may present a graphical user interface that provides a consistent user experience when navigating to different media types available on the multimedia console 100. In operation, applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 100.
  • The multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 100 may further be operated as a participant in a larger network community.
  • When the multimedia console 100 is powered ON, a set amount of hardware resources is reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's view.
  • In particular, the memory reservation preferably is large enough to contain the launch kernel, concurrent system applications and drivers. The CPU reservation is preferably constant such that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.
  • With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., popups) are displayed by using a GPU interrupt to schedule code to render popup into an overlay. The amount of memory required for an overlay depends on the overlay area size and the overlay preferably scales with screen resolution. Where a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resynch is eliminated.
  • After the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities. The system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies threads that are system application threads versus gaming application threads. The system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is to minimize cache disruption for the gaming application running on the console.
  • When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to time sensitivity. A multimedia console application manager controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.
  • Input devices (e.g., controllers 142(1) and 142(2)) are shared by gaming applications and system applications. The input devices are not reserved resources, but are to be switched between system applications and the gaming application such that each will have a focus of the device. The application manager preferably controls the switching of the input stream without the gaming application's knowledge, and a driver maintains state information regarding focus switches. In some embodiments, the capture device 20 of FIG. 2 may be an additional input device to multimedia console 100.
  • FIG. 10 illustrates another example of a computing environment that may be used to implement the computing environment 12 of FIG. 2. The capture device 20 may be coupled to the computing environment. The computing environment of FIG. 10 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the presently disclosed subject matter. Neither should the computing environment 12 of FIG. 2 be interpreted as having any dependency or requirement relating to any one or combination of components illustrated in the exemplary operating environment of FIG. 10. In some embodiments, the various depicted computing elements may include circuitry configured to instantiate specific aspects of the present disclosure. For example, the term circuitry used in the disclosure can include specialized hardware components configured to perform function(s) by firmware or switches. In other examples, the term circuitry can include a general-purpose processing unit, memory, etc., configured by software instructions that embody logic operable to perform function(s). In embodiments where circuitry includes a combination of hardware and software, an implementer may write source code embodying logic and the source code can be compiled into machine readable code that can be processed by the general purpose processing unit.
  • In FIG. 10, the computing system 220 comprises a computer 241, which typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 241 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media.
  • The computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example, FIG. 10 illustrates a hard disk drive 238 that reads from or writes to non-removable, nonvolatile magnetic media, a magnetic disk drive 239 that reads from or writes to a removable, nonvolatile magnetic disk 254, and an optical disk drive 240 that reads from or writes to a removable, nonvolatile optical disk 253 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 238 is typically connected to the system bus 221 through a non-removable memory interface such as interface 234, and magnetic disk drive 239 and optical disk drive 240 are typically connected to the system bus 221 by a removable memory interface, such as interface 235.
  • A basic input/output system 224 (BIOS), containing the basic routines that help to transfer information between elements within computer 241, such as during start-up, is typically stored in ROM 223. RAM 260 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 259. By way of example, and not limitation, FIG. 10 illustrates operating system 225, application programs 226, other program modules 227, and program data 228.
  • The drives and their associated computer storage media discussed above and illustrated in FIG. 10 provide storage of computer readable instructions, data structures, program modules and other data for the computer 241. In FIG. 10, for example, hard disk drive 238 is illustrated as storing operating system 258, application programs 257, other program modules 256, and program data 255. Note that these components can either be the same as or different from operating system 225, application programs 226, other program modules 227, and program data 228. Operating system 258, application programs 257, other program modules 256, and program data 255 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 241 through input devices such as a keyboard 251 and pointing device 252, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 259 through a user input interface 236 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). The cameras 34, 36 and capture device 20 of FIG. 2 may define additional input devices for the computer 241. A monitor 242 or other type of display device is also connected to the system bus 221 via an interface, such as a video interface 232. In addition to the monitor, computers may also include other peripheral output devices such as speakers 244 and printer 243, which may be connected through an output peripheral interface 233.
  • The computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246. The remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been illustrated in FIG. 10. The logical connections depicted in FIG. 10 include a local area network (LAN) 245 and a wide area network (WAN) 249, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.
  • When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet. The modem 250, which may be internal or external, may be connected to the system bus 221 via the user input interface 236, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 241, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 10 illustrates remote application programs 248 as residing on memory device 247. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.
  • The disclosed technology is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with the technology include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • The disclosed technology may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, software and program modules as described herein include routines, programs, objects, components, data structures, and other types of structures that perform particular tasks or implement particular abstract data types. Hardware or combinations of hardware and software may be substituted for software modules as described herein.
  • The disclosed technology may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
  • Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (20)

1. A method, comprising:
accessing a depth map that includes a plurality of depth pixels, the depth map is associated with an image coordinate system having a plane;
estimating a local orientation for each depth pixel in a subset of the depth pixels, the local orientation is one or both of an in-plane orientation and an out-of-plane orientation relative to the plane of the image coordinate system;
defining a local coordinate system for each of the depth pixels in the subset, each local coordinate system is based on the local orientation of the corresponding depth pixel;
defining a feature region relative to the local coordinate system for each of the depth pixels in the subset;
transforming the feature region for each of the depth pixels in the subset from the local coordinate system to the image coordinate system; and
using the transformed feature regions to process the depth map.
2. The method of claim 1, wherein the using the transformed feature regions to process the depth map includes:
classifying features from the depth map based on the transformed feature regions.
3. The method of claim 1, wherein the using the transformed feature regions to process the depth map includes:
training a machine learning classifier based on the transformed feature regions.
4. The method of claim 1, wherein the estimating a local orientation includes estimating an in-plane rotation, the in-plane rotation is a rotation invariant angle at a point of interest in the depth map.
5. The method of claim 1, wherein the estimating a local orientation includes estimating an out-of-plane rotation, the out-of-plane rotation is defined by a surface normal to an object in the depth map at a test depth pixel.
6. The method of claim 5, wherein the defining a local coordinate system for each of the depth pixels in the subset includes defining a z-axis of the local coordinate system as being the surface normal.
7. The method of claim 1, wherein the defining a local coordinate system for each of the depth pixels in the subset includes:
determining a vector in 3D space from a test depth pixel to a point of interest;
determining a surface normal to an object in the depth map; and
establishing an x-axis that is the component of the vector in 3D space that is orthogonal to the surface normal.
8. The method of claim 7, wherein the defining a local coordinate system for each of the depth pixels in the subset includes establishing a y-axis as the cross product of the z-axis and the x-axis.
9. A system comprising:
a depth camera for generating depth maps that includes a plurality of depth pixels, each pixel having a depth value, each depth map is associated with a 2D image coordinate system;
logic coupled to the depth camera, the logic is operable to:
access a depth map from the depth camera, the depth map is associated with an image coordinate system having a plane;
estimate a local orientation for each depth pixel in a subset of the depth pixels, the local orientation includes one or both of an in-plane orientation that is in the plane of the 2D image coordinate system and an out-of-plane orientation that is out of the plane of the 2D image coordinate system;
define a local 3D coordinate system for each of the depth pixels in the subset, each local 3D coordinate system is based on the local orientation of the corresponding depth pixel;
define a feature region relative to the local coordinate system for each of the depth pixels in the subset;
transform the feature region for each of the depth pixels in the subset from the local 3D coordinate system to the 2D image coordinate system; and
identify an object in the depth map based on the transformed feature regions.
10. The system of claim 9, wherein the estimating a local orientation includes estimating an in-plane rotation, determining the in-plane rotation includes:
determining a closest point between the test depth pixel and a reference line of the object; and
determining a rotation invariant angle for the closest point.
11. The system of claim 9, wherein the estimating a local orientation includes estimating an out-of-plane rotation, the out-of-plane rotation is defined by a surface normal to an object in the depth map at a test depth pixel.
12. The system of claim 11, wherein the defining a local coordinate system for each of the depth pixels in the subset includes defining a z-axis of the local coordinate system as being the surface normal.
13. The system of claim 9, wherein the defining a local coordinate system for each of the depth pixels in the subset includes:
determining a vector in 3D space from a test depth pixel to a point of interest;
determining a surface normal to an object in the depth map; and
establishing an x-axis that is the component of the vector in 3D space that is orthogonal to the surface normal.
14. The system of claim 9, wherein the defining a local coordinate system for each of the depth pixels in the subset includes establishing a y-axis as the cross product of the z-axis and the x-axis.
15. A computer readable storage medium having instructions stored thereon which, when executed on a processor, cause the processor to perform the steps of:
accessing a depth map that includes an array of depth pixels, each depth pixel has a depth value, the depth map is associated with a 2D image coordinate system;
estimating a local orientation for each depth pixel in a subset of the depth pixels, the local orientation includes an in-plane orientation that is in the plane of the 2D image coordinate system and an out-of-plane orientation that is out of the plane of the 2D image coordinate system;
determining a 3D model for the depth map, the model includes a plurality of 3D points that are based on the depth pixels, each of the points has a corresponding depth pixel;
defining a local 3D coordinate system for each of the plurality of points, each local 3D coordinate system is based on the position and the local orientation of the corresponding depth pixel;
defining feature test points relative to the local coordinate system for each of the points;
transforming the feature test points from the local 3D coordinate system to the 2D image coordinate system for each of the feature test points; and
identifying an object in the depth map based on the transformed feature test points.
16. The computer readable storage medium of claim 15, wherein the transforming the feature test points includes rotating the feature test points using a rotation matrix $R = [\vec{r_1}, \vec{r_2}, \vec{r_3}]$, where $\vec{r_3}$ is a unitized surface normal to the object, $\vec{r_1}$ is the component of an in-plane rotation-variant vector that is orthogonal to the surface normal, and $\vec{r_2}$ is the cross product between $\vec{r_3}$ and $\vec{r_1}$.
17. The computer readable storage medium of claim 15, wherein the in-plane rotation is a rotation invariant angle at a reference point to the test depth pixel.
18. The computer readable storage medium of claim 15, wherein the out-of-plane rotation is defined by a surface normal to the object at the test depth pixel, and wherein defining the local coordinate system for each of the plurality of points includes defining a z-axis of the local coordinate system as the surface normal.
19. The computer readable storage medium of claim 15, wherein the defining a local coordinate system for each of the plurality of points includes:
determining a vector in 3D space from a first of the points to a point of interest;
determining a surface normal to the object; and
establishing an x-axis that is the component of the vector in 3D space that is orthogonal to the surface normal.
20. The computer readable storage medium of claim 19, wherein the defining a local coordinate system for each of the plurality of points includes establishing a y-axis as the cross product of the z-axis and the x-axis.
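The claims above are dense, so here is a minimal illustrative sketch (not taken from the patent, and written in Python with NumPy purely for illustration) of the local coordinate system construction recited in claims 12-14 and 18-20, packed into the rotation matrix R = [r1, r2, r3] of claim 16. The function name local_frame and all variable names are assumptions introduced for this example.

import numpy as np

def local_frame(surface_normal, pixel_3d, point_of_interest_3d):
    # Sketch only: build the local 3D frame described in claims 12-14.
    # r3 (z-axis): unitized surface normal -- the out-of-plane orientation.
    r3 = surface_normal / np.linalg.norm(surface_normal)
    # Vector in 3D space from the depth pixel to the point of interest (claim 13).
    v = point_of_interest_3d - pixel_3d
    # r1 (x-axis): component of that vector orthogonal to the surface normal.
    # (The degenerate case where v is parallel to the normal is not handled here.)
    v_orth = v - np.dot(v, r3) * r3
    r1 = v_orth / np.linalg.norm(v_orth)
    # r2 (y-axis): cross product of the z-axis and the x-axis (claims 14 and 16).
    r2 = np.cross(r3, r1)
    # Columns [r1, r2, r3] form the rotation matrix R of claim 16.
    return np.column_stack((r1, r2, r3))

Because r1 is constructed to be orthogonal to r3 and r2 completes a right-handed triad, the returned matrix is a proper rotation from the local frame into the camera's 3D coordinates.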
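A second hedged sketch, continuing the assumptions above, shows how feature test points defined relative to the local frame (claims 9 and 15) could be transformed back into the 2D image coordinate system. The pinhole projection with intrinsics fx, fy, cx, cy is an assumption made for illustration; the claims do not specify the camera model, and the classifier that ultimately identifies the object is likewise left unspecified.

import numpy as np

def transform_feature_points(offsets_local, pixel_3d, R, fx, fy, cx, cy):
    # offsets_local: (N, 3) feature test points expressed in the local frame.
    # pixel_3d:      3D position of the depth pixel anchoring the frame.
    # R:             3x3 rotation matrix [r1, r2, r3] from local_frame() above.
    # Rotate and translate: local frame -> 3D camera coordinates.
    points_3d = pixel_3d + offsets_local @ R.T
    # Project to the 2D image coordinate system (assumed pinhole model;
    # points are assumed to lie in front of the camera, z > 0).
    u = fx * points_3d[:, 0] / points_3d[:, 2] + cx
    v = fy * points_3d[:, 1] / points_3d[:, 2] + cy
    return np.stack((u, v), axis=1)

The depth values sampled at the returned (u, v) locations would then feed whatever per-pixel classifier identifies the object; because the feature region travels with the local frame, the sampled pattern does not change as the object rotates in or out of the image plane, which is the invariance the title refers to.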
US13/155,293 2011-06-07 2011-06-07 Invariant features for computer vision Abandoned US20120314031A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/155,293 US20120314031A1 (en) 2011-06-07 2011-06-07 Invariant features for computer vision
US13/688,120 US8878906B2 (en) 2011-06-07 2012-11-28 Invariant features for computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US13/155,293 US20120314031A1 (en) 2011-06-07 2011-06-07 Invariant features for computer vision

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/688,120 Continuation US8878906B2 (en) 2011-06-07 2012-11-28 Invariant features for computer vision

Publications (1)

Publication Number Publication Date
US20120314031A1 (en) 2012-12-13

Family

ID=47292845

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/155,293 Abandoned US20120314031A1 (en) 2011-06-07 2011-06-07 Invariant features for computer vision
US13/688,120 Active 2031-07-08 US8878906B2 (en) 2011-06-07 2012-11-28 Invariant features for computer vision

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/688,120 Active 2031-07-08 US8878906B2 (en) 2011-06-07 2012-11-28 Invariant features for computer vision

Country Status (1)

Country Link
US (2) US20120314031A1 (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9940545B2 (en) * 2013-09-20 2018-04-10 Change Healthcare Llc Method and apparatus for detecting anatomical elements
US9811721B2 (en) * 2014-08-15 2017-11-07 Apple Inc. Three-dimensional hand tracking using depth sequences
TWI610250B (en) * 2015-06-02 2018-01-01 鈺立微電子股份有限公司 Monitor system and operation method thereof
US10048765B2 (en) * 2015-09-25 2018-08-14 Apple Inc. Multi media computing or entertainment system for responding to user presence and activity
GB2543777B (en) * 2015-10-27 2018-07-25 Imagination Tech Ltd Systems and methods for processing images of objects
US9734435B2 (en) * 2015-12-31 2017-08-15 Microsoft Technology Licensing, Llc Recognition of hand poses by classification using discrete values
US10078377B2 (en) 2016-06-09 2018-09-18 Microsoft Technology Licensing, Llc Six DOF mixed reality input by fusing inertial handheld controller with hand tracking
US10929654B2 (en) * 2018-03-12 2021-02-23 Nvidia Corporation Three-dimensional (3D) pose estimation from a monocular camera
US11227435B2 (en) 2018-08-13 2022-01-18 Magic Leap, Inc. Cross reality system
US10957112B2 (en) 2018-08-13 2021-03-23 Magic Leap, Inc. Cross reality system
CN113196209A (en) 2018-10-05 2021-07-30 奇跃公司 Rendering location-specific virtual content at any location
JP2022551733A (en) 2019-10-15 2022-12-13 マジック リープ, インコーポレイテッド Cross-reality system with localization service
US11632679B2 (en) 2019-10-15 2023-04-18 Magic Leap, Inc. Cross reality system with wireless fingerprints
JP2022551734A (en) 2019-10-15 2022-12-13 マジック リープ, インコーポレイテッド Cross-reality system that supports multiple device types
CN114616509A (en) 2019-10-31 2022-06-10 奇跃公司 Cross-reality system with quality information about persistent coordinate frames
WO2021096931A1 (en) 2019-11-12 2021-05-20 Magic Leap, Inc. Cross reality system with localization service and shared location-based content
EP4073763A4 (en) 2019-12-09 2023-12-27 Magic Leap, Inc. Cross reality system with simplified programming of virtual content
EP4103910A4 (en) 2020-02-13 2024-03-06 Magic Leap, Inc. Cross reality system with accurate shared maps
JP2023514207A (en) 2020-02-13 2023-04-05 マジック リープ, インコーポレイテッド Cross-reality system with prioritization of geolocation information for localization
JP2023514208A (en) 2020-02-13 2023-04-05 マジック リープ, インコーポレイテッド Cross-reality system with map processing using multi-resolution frame descriptors
CN115461787A (en) 2020-02-26 2022-12-09 奇跃公司 Cross reality system with quick positioning
JP2023524446A (en) 2020-04-29 2023-06-12 マジック リープ, インコーポレイテッド Cross-reality system for large-scale environments

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6711293B1 (en) 1999-03-08 2004-03-23 The University Of British Columbia Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image
US7133572B2 (en) 2002-10-02 2006-11-07 Siemens Corporate Research, Inc. Fast two dimensional object localization based on oriented edges
US7689033B2 (en) 2003-07-16 2010-03-30 Microsoft Corporation Robust multi-view face detection methods and apparatuses
US7274832B2 (en) 2003-11-13 2007-09-25 Eastman Kodak Company In-plane rotation invariant object detection in digitized images
US8503720B2 (en) 2009-05-01 2013-08-06 Microsoft Corporation Human body pose estimation
US20110025689A1 (en) 2009-07-29 2011-02-03 Microsoft Corporation Auto-Generating A Visual Representation

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060285755A1 (en) * 2005-06-16 2006-12-21 Strider Labs, Inc. System and method for recognition in 2D images using 3D class models
US20090016604A1 (en) * 2007-07-11 2009-01-15 Qifa Ke Invisible Junction Features for Patch Recognition
US20090157649A1 (en) * 2007-12-17 2009-06-18 Panagiotis Papadakis Hybrid Method and System for Content-based 3D Model Search
US20090185746A1 (en) * 2008-01-22 2009-07-23 The University Of Western Australia Image recognition
US20100246915A1 (en) * 2009-03-27 2010-09-30 Mitsubishi Electric Corporation Patient registration system
US20120219188A1 (en) * 2009-10-19 2012-08-30 Metaio Gmbh Method of providing a descriptor for at least one feature of an image and method of matching features

Cited By (48)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120306735A1 (en) * 2011-06-01 2012-12-06 Microsoft Corporation Three-dimensional foreground selection for vision system
US9594430B2 (en) * 2011-06-01 2017-03-14 Microsoft Technology Licensing, Llc Three-dimensional foreground selection for vision system
US20130131836A1 (en) * 2011-11-21 2013-05-23 Microsoft Corporation System for controlling light enabled devices
US9628843B2 (en) * 2011-11-21 2017-04-18 Microsoft Technology Licensing, Llc Methods for controlling electronic devices using gestures
US10152674B2 (en) 2012-01-16 2018-12-11 Texas Instruments Incorporated Accelerated decision tree execution
US11429872B2 (en) 2012-01-16 2022-08-30 Texas Instruments Incorporated Accelerated decision tree execution
US20140208274A1 (en) * 2013-01-18 2014-07-24 Microsoft Corporation Controlling a computing-based device using hand gestures
WO2014154839A1 (en) * 2013-03-27 2014-10-02 Mindmaze S.A. High-definition 3d camera device
US20140313136A1 (en) * 2013-04-22 2014-10-23 Fuji Xerox Co., Ltd. Systems and methods for finger pose estimation on touchscreen devices
US9069415B2 (en) * 2013-04-22 2015-06-30 Fuji Xerox Co., Ltd. Systems and methods for finger pose estimation on touchscreen devices
US20140347443A1 (en) * 2013-05-24 2014-11-27 David Cohen Indirect reflection suppression in depth imaging
US9729860B2 (en) * 2013-05-24 2017-08-08 Microsoft Technology Licensing, Llc Indirect reflection suppression in depth imaging
US9310983B2 (en) * 2013-10-16 2016-04-12 3M Innovative Properties Company Adding, deleting digital notes from a group of digital notes
US10175845B2 (en) 2013-10-16 2019-01-08 3M Innovative Properties Company Organizing digital notes on a user interface
US10698560B2 (en) 2013-10-16 2020-06-30 3M Innovative Properties Company Organizing digital notes on a user interface
US20150161437A1 (en) * 2013-10-30 2015-06-11 Ivan L. Mazurenko Image processor comprising gesture recognition system with computationally-efficient static hand pose recognition
US20150139487A1 (en) * 2013-11-21 2015-05-21 Lsi Corporation Image processor with static pose recognition module utilizing segmented region of interest
US9524582B2 (en) * 2014-01-28 2016-12-20 Siemens Healthcare Gmbh Method and system for constructing personalized avatars using a parameterized deformable mesh
US20150213646A1 (en) * 2014-01-28 2015-07-30 Siemens Aktiengesellschaft Method and System for Constructing Personalized Avatars Using a Parameterized Deformable Mesh
US20150302317A1 (en) * 2014-04-22 2015-10-22 Microsoft Corporation Non-greedy machine learning for high accuracy
US9607215B1 (en) * 2014-09-24 2017-03-28 Amazon Technologies, Inc. Finger detection in 3D point cloud
US20160196657A1 (en) * 2015-01-06 2016-07-07 Oculus Vr, Llc Method and system for providing depth mapping using patterned light
US9609242B2 (en) * 2015-06-25 2017-03-28 Intel Corporation Auto-correction of depth-sensing camera data for planar target surfaces
TWI737608B (en) * 2015-06-25 2021-09-01 美商英特爾股份有限公司 Method, apparatus, and non-transitory computer readable media for auto-correction of depth-sensing camera data for planar target surfaces
US11716487B2 (en) * 2015-11-11 2023-08-01 Sony Corporation Encoding apparatus and encoding method, decoding apparatus and decoding method
US10380767B2 (en) 2016-08-01 2019-08-13 Cognex Corporation System and method for automatic selection of 3D alignment algorithms in a vision system
US10437342B2 (en) 2016-12-05 2019-10-08 Youspace, Inc. Calibration systems and methods for depth-based interfaces with disparate fields of view
US10325184B2 (en) * 2017-04-12 2019-06-18 Youspace, Inc. Depth-value classification using forests
US11258997B2 (en) 2017-11-14 2022-02-22 Texas Instruments Incorporated Camera-assisted arbitrary surface characterization and slope-based correction
US20190146313A1 (en) * 2017-11-14 2019-05-16 Texas Instruments Incorporated Camera-assisted arbitrary surface characterization and correction
US11592732B2 (en) * 2017-11-14 2023-02-28 Texas Instruments Incorporated Camera-assisted arbitrary surface characterization and correction
US20190166339A1 (en) * 2017-11-14 2019-05-30 Texas Instruments Incorporated Camera-assisted arbitrary surface characterization and slope-based correction
US10681318B2 (en) * 2017-11-14 2020-06-09 Texas Instruments Incorporated Camera-assisted arbitrary surface characterization and slope-based correction
US10684537B2 (en) * 2017-11-14 2020-06-16 Texas Instruments Incorporated Camera-assisted arbitrary surface characterization and correction
CN109074661A (en) * 2017-12-28 2018-12-21 深圳市大疆创新科技有限公司 Image processing method and equipment
WO2019127192A1 (en) * 2017-12-28 2019-07-04 深圳市大疆创新科技有限公司 Image processing method and apparatus
US10957072B2 (en) 2018-02-21 2021-03-23 Cognex Corporation System and method for simultaneous consideration of edges and normals in image features by a vision system
US11881000B2 (en) 2018-02-21 2024-01-23 Cognex Corporation System and method for simultaneous consideration of edges and normals in image features by a vision system
US11573641B2 (en) * 2018-03-13 2023-02-07 Magic Leap, Inc. Gesture recognition system and method of using same
US20230152902A1 (en) * 2018-03-13 2023-05-18 Magic Leap, Inc. Gesture recognition system and method of using same
US10650584B2 (en) * 2018-03-30 2020-05-12 Konica Minolta Laboratory U.S.A., Inc. Three-dimensional modeling scanner
US20190304175A1 (en) * 2018-03-30 2019-10-03 Konica Minolta Laboratory U.S.A., Inc. Three-dimensional modeling scanner
US11093737B2 (en) * 2018-08-14 2021-08-17 Boe Technology Group Co., Ltd. Gesture recognition method and apparatus, electronic device, and computer-readable storage medium
CN109409311A (en) * 2018-11-07 2019-03-01 上海为森车载传感技术有限公司 A kind of limit for height method for early warning based on binocular stereo vision
CN110059594A (en) * 2019-04-02 2019-07-26 北京旷视科技有限公司 A kind of environment sensing adapting to image recognition methods and device
CN110097575A (en) * 2019-04-28 2019-08-06 电子科技大学 A kind of method for tracking target based on local feature and scale pond
CN111079819A (en) * 2019-12-12 2020-04-28 哈尔滨市科佳通用机电股份有限公司 Method for judging state of coupler knuckle pin of railway wagon based on image recognition and deep learning
US12141366B2 (en) * 2023-01-05 2024-11-12 Magic Leap, Inc. Gesture recognition system and method of using same

Also Published As

Publication number Publication date
US8878906B2 (en) 2014-11-04
US20140002607A1 (en) 2014-01-02

Similar Documents

Publication Publication Date Title
US8878906B2 (en) Invariant features for computer vision
US8610723B2 (en) Fully automatic dynamic articulated model calibration
Memo et al. Head-mounted gesture controlled interface for human-computer interaction
US9344707B2 (en) Probabilistic and constraint based articulated model fitting
US8953844B2 (en) System for fast, probabilistic skeletal tracking
US8452051B1 (en) Hand-location post-process refinement in a tracking system
US8558873B2 (en) Use of wavefront coding to create a depth image
US8897491B2 (en) System for finger recognition and tracking
US8983233B2 (en) Time-of-flight depth imaging
US8929612B2 (en) System for recognizing an open or closed hand
US8660303B2 (en) Detection of body and props
US8638985B2 (en) Human body pose estimation
US20110317871A1 (en) Skeletal joint recognition and tracking system
US20150310256A1 (en) Depth image processing
US20140085625A1 (en) Skin and other surface classification using albedo

Legal Events

Date Code Title Description
AS Assignment

Owner name: MICROSOFT CORPORATION, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHOTTON, JAMIE D. J.;FINOCCHIO, MARK J.;MOORE, RICHARD E.;AND OTHERS;SIGNING DATES FROM 20110603 TO 20110606;REEL/FRAME:026412/0187

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034766/0509

Effective date: 20141014