Generalizing a model for animating adverbs of manner in American Sign Language

Machine Translation

Abstract

This work aims to show that a model developed to generate adverbs of manner can be generalized and applied to a variety of neutral animated signs for avatar sign language synthesis. The paper generalizes an approach, first presented at SLTAT 2019 in Hamburg, for modeling linguistic processes that manifest as modifications to the visual-manual channel. It discusses extensions that make the model effective for a broader range of signs, including one-handed and two-handed signs, repeating and non-repeating signs, signs with contact, and signs requiring additional rotational adjustments to the wrists. The paper also includes interim results from an ongoing user study.
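To make the kind of modification described above concrete, the minimal sketch below shows one plausible way a manner adverb could be layered onto a neutral animated sign as adjustments to keyframe timing, joint-angle amplitude, wrist rotation, and repetition count, with contact frames left unscaled so hand contact is preserved. The keyframe representation, the parameter names (tempo, amplitude, wrist_offset, repetitions), and the contact rule are illustrative assumptions, not the implementation described in the paper.

```python
# Illustrative sketch only; the paper's actual model is not reproduced here.
# Assumption: a neutral sign is stored as keyframes of joint angles over time.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Keyframe:
    time: float               # seconds from sign onset
    angles: Dict[str, float]  # joint name -> rotation in degrees
    contact: bool = False     # True if the hands are in contact at this frame

@dataclass
class MannerParams:
    tempo: float = 1.0        # <1.0 slows the sign, >1.0 speeds it up
    amplitude: float = 1.0    # scales excursion of the manual channel
    wrist_offset: float = 0.0 # additional wrist rotation in degrees
    repetitions: int = 1      # repeat count for repeating signs

def apply_manner(sign: List[Keyframe], p: MannerParams) -> List[Keyframe]:
    """Return a modified copy of a neutral sign expressing a manner adverb."""
    out: List[Keyframe] = []
    duration = sign[-1].time / p.tempo  # duration of one tempo-scaled repetition
    for rep in range(p.repetitions):
        offset = rep * duration
        for kf in sign:
            angles = {}
            for joint, value in kf.angles.items():
                # Preserve contact frames; scale amplitude elsewhere.
                scaled = value if kf.contact else value * p.amplitude
                if joint.startswith("wrist"):
                    scaled += p.wrist_offset
                angles[joint] = scaled
            out.append(Keyframe(time=offset + kf.time / p.tempo,
                                angles=angles,
                                contact=kf.contact))
    return out

# Example: perform a (toy) neutral sign twice, slightly slower and larger,
# to suggest a hypothetical manner adverb such as "carefully".
neutral_sign = [
    Keyframe(0.0, {"wrist_r": 0.0, "elbow_r": 45.0}),
    Keyframe(0.5, {"wrist_r": 20.0, "elbow_r": 90.0}, contact=True),
]
modified = apply_manner(neutral_sign, MannerParams(tempo=0.8, amplitude=1.2, repetitions=2))
```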



Author information


Correspondence to Robyn Moncrief.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Moncrief, R. Generalizing a model for animating adverbs of manner in American Sign Language. Machine Translation 35, 345–362 (2021). https://doi.org/10.1007/s10590-021-09279-9
