
QuickSet: multimodal interaction for distributed applications

Published: 01 November 1997
DOI: 10.1145/266180.266328

Published In

MULTIMEDIA '97: Proceedings of the fifth ACM international conference on Multimedia
November 1997
444 pages
ISBN:0897919912
DOI:10.1145/266180
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 01 November 1997

Author Tags

  1. agent architecture
  2. distributed interactive simulation
  3. gesture recognition
  4. multimodal interfaces
  5. natural language processing
  6. speech recognition

Qualifiers

  • Article

Conference

MM97: The Fifth Annual ACM International Multimedia Conference
November 9 - 13, 1997
Seattle, Washington, USA

Acceptance Rates

MULTIMEDIA '97 Paper Acceptance Rate: 40 of 142 submissions, 28%
Overall Acceptance Rate: 2,145 of 8,556 submissions, 25%

Article Metrics

  • Downloads (Last 12 months): 232
  • Downloads (Last 6 weeks): 29
Reflects downloads up to 14 Dec 2024

Cited By

  • (2024) Levels of Multimodal Interaction. Companion Proceedings of the 26th International Conference on Multimodal Interaction, DOI: 10.1145/3686215.3690153, pp. 51-55. Online publication date: 4-Nov-2024.
  • (2024) Digital Forms for All: A Holistic Multimodal Large Language Model Agent for Health Data Entry. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, DOI: 10.1145/3659624, 8(2), pp. 1-39. Online publication date: 15-May-2024.
  • (2024) Body Language for VUIs: Exploring Gestures to Enhance Interactions with Voice User Interfaces. Proceedings of the 2024 ACM Designing Interactive Systems Conference, DOI: 10.1145/3643834.3660691, pp. 133-150. Online publication date: 1-Jul-2024.
  • (2024) Mind the Mix: Exploring the Cognitive Underpinnings of Multimodal Interaction in Augmented Reality Systems. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, DOI: 10.1145/3613905.3650874, pp. 1-7. Online publication date: 11-May-2024.
  • (2024) Elastica: Adaptive Live Augmented Presentations with Elastic Mappings Across Modalities. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, DOI: 10.1145/3613904.3642725, pp. 1-19. Online publication date: 11-May-2024.
  • (2024) ReactGenie: A Development Framework for Complex Multimodal Interactions Using Large Language Models. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, DOI: 10.1145/3613904.3642517, pp. 1-23. Online publication date: 11-May-2024.
  • (2024) GazePointAR: A Context-Aware Multimodal Voice Assistant for Pronoun Disambiguation in Wearable Augmented Reality. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, DOI: 10.1145/3613904.3642230, pp. 1-20. Online publication date: 11-May-2024.
  • (2024) The Effect of Directional Airflow toward Vection and Cybersickness. 2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR), DOI: 10.1109/VR58804.2024.00103, pp. 839-848. Online publication date: 16-Mar-2024.
  • (2023) Playing Rock-Paper-Scissors Using AI Through OpenCV. The Software Principles of Design for Data Modeling, DOI: 10.4018/978-1-6684-9809-5.ch004, pp. 41-64. Online publication date: 30-Jun-2023.
  • (2023) Graphologue: Exploring Large Language Model Responses with Interactive Diagrams. Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology, DOI: 10.1145/3586183.3606737, pp. 1-20. Online publication date: 29-Oct-2023.
