
Using nonspeech sounds to provide navigation cues

Published: 01 September 1998

Abstract

This article describes 3 experiments that investigate the possibility of using structured nonspeech audio messages called earcons to provide navigational cues in a menu hierarchy. A hierarchy of 27 nodes and 4 levels was created with an earcon for each node. Rules were defined for the creation of hierarchical earcons at each node. Participants had to identify their location in the hierarchy by listening to an earcon. Results of the first experiment showed that participants could identify their location with 81.5% accuracy, indicating that earcons were a powerful method of communicating hierarchy information. One proposed use for such navigation cues is in telephone-based interfaces (TBIs), where navigation is a problem. The first experiment did not address the particular problems of earcons in TBIs, such as “does the lower quality of sound over the telephone lower recall rates?”, “can users remember earcons over a period of time?”, and “what effect does training type have on recall?” A second experiment was conducted, and results showed that sound quality did lower the recall of earcons. However, a redesign of the earcons overcame this problem, with 73% recalled correctly. Participants could still recall earcons at this level after a week had passed. Training type also affected recall: with personal training participants recalled 73% of the earcons, but with purely textual training results were significantly lower. These results show that earcons can provide good navigation cues for TBIs. The final experiment used compound, rather than hierarchical, earcons to represent the hierarchy from the first experiment. Results showed that with sounds constructed in this way participants could recall 97% of the earcons. These experiments have developed our general understanding of earcons. A hierarchy three times larger than any previously created was tested, and this was also the first test of the recall of earcons over time.
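To make the hierarchical-earcon idea concrete, the Python sketch below shows one plausible construction in which each level of the path from the root contributes one musical property to a node's earcon, so a listener can decode a location by recognising the accumulated properties in order. The parameter pools and the level-to-property assignment here are illustrative assumptions, not the rules defined in the article.

```python
# Illustrative sketch of hierarchical earcons. The concrete parameter pools
# and the level-to-property mapping are assumptions for illustration, not
# the rule set defined in the article.
from dataclasses import dataclass, field

TIMBRES = ["organ", "brass", "strings", "flute"]    # level 1 varies timbre
RHYTHMS = ["x..x", "xx..", "x.x.", "xxx."]          # level 2 varies rhythm
REGISTERS = ["low", "mid", "high"]                  # level 3 varies register

@dataclass
class Earcon:
    """Ordered musical properties accumulated along the path from the root."""
    properties: list = field(default_factory=list)

def hierarchical_earcon(path):
    """Build the earcon for the node reached by `path`, a tuple of child
    indices, e.g. (1, 0, 2) = second child, then first child, then third.
    Each level adds one property, so deeper nodes sound more elaborate
    and the ordered properties identify the location."""
    pools = [TIMBRES, RHYTHMS, REGISTERS]
    earcon = Earcon()
    for level, index in enumerate(path):
        earcon.properties.append((level + 1, pools[level][index]))
    return earcon

# A level-3 node inherits its ancestors' sounds and adds its own register:
print(hierarchical_earcon((1, 0, 2)).properties)
# -> [(1, 'brass'), (2, 'x..x'), (3, 'high')]
```

One limitation this sketch makes visible is that every node's combination of properties must be learned; the compound earcons of the final experiment, discussed in the review below, instead reuse a small fixed vocabulary.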



Reviews

Martha Elizabeth Crosby

Advances in the understanding of how people use computers have led to the greatest innovations in software development. Brewster describes three experiments that show how structured nonverbal audio messages called “earcons” can provide navigational cues in a nonverbal user interface. Earcons are abstract musical tones that use repetition, variation, and contrast of qualities such as timbre, register, intensity, pitch, and rhythm in structured combinations to create sound messages.

Experiment 1 used a menu hierarchy of 27 nodes on 4 levels. The participants correctly recalled 81.5 percent of the 14 earcons they heard, showing the viability of an aural interface. Experiment 2 was designed to generalize these results by addressing the influence of sound quality, method of training, and time on the percentage of earcons recalled. Brewster reports that participants could still recall the earcons after a week, but that the quality of sound and the type of training influenced how well they recalled them. Experiment 3 used compound rather than hierarchical earcons to represent the structure from experiment 1. Earcons were created that did not require the participants to remember more than seven rules. The new design improved the number of earcons recalled from 81.5 percent to 97 percent. This type of earcon design has the advantage of scaling to arbitrarily sized hierarchies, and participants do not need to be retrained for each new structure's size and shape.

This paper provides interface designers with valuable information. It will be particularly valuable to those who are responsible for designing complex displays to clearly present large amounts of changing data in ways compatible with users' information needs. In addition to developers of applications that use telephone-based interfaces, mentioned by Brewster, this work should be of particular interest to designers of multimodal computing environments.
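A minimal sketch of the compound-earcon idea the review describes, under the assumption that a node's address is played as a sequence of independently learned per-position motifs with a separator cue between levels; the motif vocabulary and separator below are hypothetical, not the sounds used in the experiment.

```python
# Illustrative sketch of compound earcons: a small, separately learned
# vocabulary covers hierarchies of any size or shape. The motifs and the
# separator cue are hypothetical, not Brewster's actual sounds.

POSITION_MOTIFS = {              # one short motif per child position
    1: "single low note",
    2: "two rising notes",
    3: "three rising notes",
}
SEPARATOR = "short drum tap"     # assumed cue marking a level boundary

def compound_earcon(path):
    """Announce the node at `path` by concatenating per-position motifs:
    node 2.1.3 -> motif(2), separator, motif(1), separator, motif(3)."""
    sounds = []
    for depth, position in enumerate(path):
        if depth:                # separate successive levels
            sounds.append(SEPARATOR)
        sounds.append(POSITION_MOTIFS[position])
    return sounds

print(compound_earcon((2, 1, 3)))
# -> ['two rising notes', 'short drum tap', 'single low note',
#     'short drum tap', 'three rising notes']
```

Because the same few motifs recombine for every node, a new or reshaped hierarchy needs no retraining, which is the scaling advantage the review highlights.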



Published In

ACM Transactions on Computer-Human Interaction, Volume 5, Issue 3
September 1998, 118 pages
ISSN: 1073-0516
EISSN: 1557-7325
DOI: 10.1145/292834
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 01 September 1998
Published in TOCHI Volume 5, Issue 3


Author Tags

  1. auditory interfaces
  2. earcons
  3. navigation
  4. nonspeech audio
  5. telephone-based interfaces

Qualifiers

  • Article


Bibliometrics & Citations

Article Metrics

  • Downloads (Last 12 months): 157
  • Downloads (Last 6 weeks): 15

Reflects downloads up to 04 Jan 2025

Cited By

  • (2024) Tonal Cognition in Sonification: Exploring the Needs of Practitioners in Sonic Interaction Design. Proceedings of the 19th International Audio Mostly Conference: Explorations in Sonic Cultures, 218-228. DOI: 10.1145/3678299.3678321. Online publication date: 18-Sep-2024.
  • (2024) Towards a Personal Audio Space in Homes: Investigating Future Sound Management with Personal Audio Technologies. Proceedings of the 2024 ACM International Conference on Interactive Media Experiences, 276-293. DOI: 10.1145/3639701.3656313. Online publication date: 7-Jun-2024.
  • (2024) SoundOrbit: motion-correlation interaction with auditory orbital trajectories. Personal and Ubiquitous Computing, 28:5, 763-778. DOI: 10.1007/s00779-024-01818-4. Online publication date: 1-Oct-2024.
  • (2023) Music theoretical approach to auditory interface design: progressive, explainable, and accessible. Proceedings of the 15th Conference on Creativity and Cognition, 39-42. DOI: 10.1145/3591196.3596814. Online publication date: 19-Jun-2023.
  • (2023) Living with Sound Zones: A Long-term Field Study of Dynamic Sound Zones in a Domestic Context. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-14. DOI: 10.1145/3544548.3581535. Online publication date: 19-Apr-2023.
  • (2022) Grid-Coding: An Accessible, Efficient, and Structured Coding Paradigm for Blind and Low-Vision Programmers. Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology, 1-21. DOI: 10.1145/3526113.3545620. Online publication date: 29-Oct-2022.
  • (2022) Accessible Blockly: An Accessible Block-Based Programming Library for People with Visual Impairments. Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility, 1-15. DOI: 10.1145/3517428.3544806. Online publication date: 23-Oct-2022.
  • (2022) Do You See What I Hear? — Peripheral Absolute and Relational Visualisation Techniques for Sound Zones. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1-13. DOI: 10.1145/3491102.3501938. Online publication date: 29-Apr-2022.
  • (2022) Evaluation of Preview Cues to Enhance Recall of Auditory Sequential Information. Auditory Perception & Cognition, 5:3-4, 282-299. DOI: 10.1080/25742442.2022.2095236. Online publication date: 30-Jun-2022.
  • (2022) Grouping and Determining Perceived Severity of Cyber-Attack Consequences: Gaining Information Needed to Sonify Cyber-Attacks. Journal on Multimodal User Interfaces, 16:4, 399-412. DOI: 10.1007/s12193-022-00397-z. Online publication date: 1-Nov-2022.
