
Automatic Sign Language Analysis: A Survey and the Future beyond Lexical Meaning

Published: 01 June 2005

Abstract

Research in automatic analysis of sign language has largely focused on recognizing the lexical (or citation) form of sign gestures as they appear in continuous signing, and on developing algorithms that scale well to large vocabularies. However, successful recognition of lexical signs is not sufficient for a full understanding of sign language communication. Nonmanual signals and grammatical processes that result in systematic variations in sign appearance are integral aspects of this communication but have received comparatively little attention in the literature. In this survey, we examine the data acquisition, feature extraction, and classification methods employed for the analysis of sign language gestures. These are discussed with respect to issues such as modeling transitions between signs in continuous signing, modeling inflectional processes, signer independence, and adaptation. We further examine works that attempt to analyze nonmanual signals and discuss issues related to integrating these with (hand) sign gestures. We also discuss overall progress toward a true test of sign recognition systems: dealing with natural signing by native signers. We suggest some future directions for this research and also point to contributions it can make to other fields of research. Web-based supplemental materials (appendices), which contain several illustrative examples and videos of signing, can be found at www.computer.org/publications/dlib.
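The classification step the abstract refers to is commonly realized with one hidden Markov model per lexical sign, decoded over a sequence of quantized features. The following is an illustrative sketch only, not any system from the survey: the signs, state counts, and all probabilities are invented, and the "observations" are toy hand-shape codes rather than real extracted features.

```python
import math

# Toy isolated-sign classifier: one discrete HMM per sign, Viterbi decoding.
# All models, signs, and numbers below are invented for illustration.

def viterbi_log_score(obs, start, trans, emit):
    """Best-path log-probability of the observation sequence under one HMM."""
    n = len(start)
    # Initialize with the first observation.
    v = [math.log(start[s] * emit[s][obs[0]]) for s in range(n)]
    # Dynamic-programming recursion over the remaining observations.
    for o in obs[1:]:
        v = [max(v[p] + math.log(trans[p][s]) for p in range(n))
             + math.log(emit[s][o]) for s in range(n)]
    return max(v)

def classify(obs, models):
    """Return the sign whose HMM best explains the observation sequence."""
    return max(models, key=lambda sign: viterbi_log_score(obs, *models[sign]))

# Two toy two-state models over two symbols (0 = open hand, 1 = fist), each
# given as (initial probabilities, transition matrix, emission matrix).
models = {
    "HELLO": ([0.9, 0.1], [[0.7, 0.3], [0.2, 0.8]], [[0.9, 0.1], [0.2, 0.8]]),
    "YES":   ([0.9, 0.1], [[0.7, 0.3], [0.2, 0.8]], [[0.1, 0.9], [0.8, 0.2]]),
}

print(classify([0, 0, 1, 1], models))  # best explained by the "HELLO" model
```

Continuous signing adds the complications the survey highlights (movement epenthesis between signs, inflectional variation), which this isolated-sign sketch deliberately ignores.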




Published In

IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 27, Issue 6, June 2005, 176 pages

Publisher

IEEE Computer Society

United States


Author Tags

  1. Sign language recognition
  2. face tracking
  3. facial expression recognition
  4. gesture analysis
  5. hand gesture recognition
  6. hand tracking
  7. head gesture recognition
  8. head tracking
  9. review

Qualifiers

  • Research-article


Cited By

  • (2024) "SLR-YOLO," Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology, vol. 46, no. 1, pp. 1663-1680, doi: 10.3233/JIFS-235132, online 1 Jan. 2024.
  • (2024) "Self-Supervised Representation Learning With Spatial-Temporal Consistency for Sign Language Recognition," IEEE Transactions on Image Processing, vol. 33, pp. 4188-4201, doi: 10.1109/TIP.2024.3416881, online 1 Jan. 2024.
  • (2024) "EvSign: Sign Language Recognition and Translation with Streaming Events," Computer Vision - ECCV 2024, pp. 335-351, doi: 10.1007/978-3-031-72652-1_20, online 29 Sep. 2024.
  • (2024) "Intelligent language analysis method for multi-sensor data fusion," Internet Technology Letters, vol. 7, no. 2, doi: 10.1002/itl2.441, online 7 Mar. 2024.
  • (2023) "Multi-state feature optimization of sign glosses for continuous sign language recognition," Journal of Intelligent & Fuzzy Systems: Applications in Engineering and Technology, vol. 45, no. 4, pp. 6645-6654, doi: 10.3233/JIFS-223601, online 1 Jan. 2023.
  • (2023) "Self-emphasizing network for continuous sign language recognition," Proc. Thirty-Seventh AAAI Conference on Artificial Intelligence, pp. 854-862, doi: 10.1609/aaai.v37i1.25164, online 7 Feb. 2023.
  • (2023) "CLIP-Hand3D: Exploiting 3D Hand Pose Estimation via Context-Aware Prompting," Proc. 31st ACM International Conference on Multimedia, pp. 4896-4907, doi: 10.1145/3581783.3612390, online 26 Oct. 2023.
  • (2022) "Static and Dynamic Isolated Indian and Russian Sign Language Recognition with Spatial and Temporal Feature Detection Using Hybrid Neural Network," ACM Transactions on Asian and Low-Resource Language Information Processing, vol. 22, no. 1, pp. 1-23, doi: 10.1145/3530989, online 25 Nov. 2022.
  • (2022) "WearSign," Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, vol. 6, no. 1, pp. 1-27, doi: 10.1145/3517257, online 29 Mar. 2022.
  • (2022) "Research on the improved gesture tracking algorithm in sign language synthesis," The Journal of Supercomputing, vol. 79, no. 1, pp. 867-879, doi: 10.1007/s11227-022-04705-y, online 20 Jul. 2022.
