Abstract
Interactive computer simulations are effective learning tools commonly used in science education; however, they are inaccessible to many students with disabilities. In this paper, we present initial findings from the design and implementation of accessibility features for the PhET Interactive Simulation Balloons and Static Electricity, focusing on access for screen reader users. We designed an interaction flow that connected keyboard interactions with reactions in dynamic content. Then, using a Parallel Document Object Model (PDOM), we created access to simulation content and interactive sim objects. We conducted interviews with screen reader users to evaluate our progress and to better understand how they engage with interactive simulations. We share findings about our successes and challenges in the design and delivery of dynamic verbal text description and efficient keyboard navigation, and in the creation of a keyboard-accessible drag-and-release mechanism for a highly interactive simulation object, a Balloon.
Keywords
- Web accessibility
- Usability
- Blind users
- Inclusive design
- Non-visual user interface
- Parallel document object model
- Keyboard interaction
- Text description
- Educational simulation
- Interactive science simulation
1 Introduction
Interactive computer simulations are commonly used science education resources shown to be effective in supporting student learning [1, 2]. Interactive simulations allow students to investigate scientific phenomena across a range of size and time scales, and allow for experimentation when physical equipment is either not available or not accessible to the student. While the use of simulations has been shown to benefit student learning, they are often inaccessible to students with disabilities. Interactive simulations are generally highly visual and designed for mouse- or touch-driven interactions, making them particularly inaccessible to students with vision loss.
The PhET Interactive Simulations project [3] has created a popular suite of over 130 interactive science and mathematics simulations. These highly interactive simulations (or “sims”) are run over 75 million times a year by teachers and students around the world, and are pushing the capabilities of web technologies and standards to their limits. In this paper, we present findings from the design and implementation of accessibility features for the PhET sim Balloons and Static Electricity [4]. Our goal was to make this sim accessible and usable by screen reader users. In the process, we addressed challenges in the delivery of dynamic content and interactions, design of efficient keyboard navigation and operation, and user interaction with complex sim features. We conducted interviews with screen reader users to evaluate our progress, and to understand better how screen reader users engage with interactive simulations. We found that when access is successful, user engagement and learning can take place.
2 PhET Sim: Balloons and Static Electricity
The Balloons and Static Electricity sim (Fig. 1A, B) can be used to support student learning of topics related to static electricity, including transfer of charge, induction, attraction, repulsion, and grounding. This sim is used in classrooms from middle grades up to introductory college level, with students from age 10 to adult. Upon startup, the user encounters the sim’s Play Area, containing a Sweater on the left side, a centrally located Balloon, and a Wall on the right side. Representations of positive and negative charges are shown overlaying all of these objects. At the bottom of the screen is the Control Panel area, including: a set of three radio buttons that control what charge representations are shown (all charges, no charges, or charge difference), a toggle switch that allows the user to change between experimenting with one Balloon or two, a Reset All button that resets the screen to its initial state, and a Remove Wall button that adds or removes the Wall.
The Balloon can be moved and rubbed against the Sweater (resulting in a transfer of negative charges from the Sweater to the Balloon) and the Wall (resulting in no transfer of charges). Releasing the Balloon results in the Balloon being attracted to the Sweater or Wall, depending on the total amount of charge present on the Balloon and its proximity to either the Sweater, or the Wall. For example, rubbing the Balloon on the Sweater results in a transfer of negative charges from the Sweater to the Balloon, and the now negatively charged Balloon, upon release from the middle of the Play Area, is attracted to (moves toward and “sticks” to) the now positively charged Sweater. Releasing the Balloon near the Wall may result in the Balloon attracting to the neutral Wall (Fig. 1B) or attracting back to the Sweater.
In the original sim, all interactions, including moving the Balloon and activating buttons and radio buttons, were mouse or touch events. No verbal description of visual representations or dynamic changes was provided.
3 Accessible Design Features
To provide access for screen reader users, we implemented the following enhancements.
3.1 Access to Sim Content and Interactions
To make the content and interactions of the sim accessible to assistive technologies (AT), we designed a semantically rich HTML-based hierarchical representation of the sim that describes all objects and interactions. We refer to this accessible feature as the Parallel Document Object Model (or “PDOM”). The reasoning for the PDOM approach has been addressed previously [5]. In this work, we enhanced the PDOM with the rich semantics available in HTML. Through native semantics, the use of headings, and the linear order of elements, we created a hierarchy that conveys the spatial layout of the sim and the relationships among the sim objects. This structure makes it possible for screen reader users to perceive these relationships as they explore the sim, gaining an understanding of how the relationships relate to the interactions in the sim. For example, the Play Area contains three objects: the Balloon, the Sweater, and the Wall. We communicate that the objects have an important relationship through their heading structure. Each object’s label (or name) is marked up as an H3 heading. The heading for the Play Area is an H2, conveying that it is the parent of these sibling objects. Details about each object are contained in a paragraph under each of the respective objects’ headings. Design features that provide visual users with clues, within the design itself, on how to interact with the sim are referred to as “implicit scaffolding” [6]; providing hierarchical structure and a Tab order (Fig. 1C, circled numbers) that is based on pedagogical importance is an attempt to provide implicit scaffolds for screen reader users.
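To illustrate, the following is a minimal sketch of the kind of heading hierarchy described above; the wording, attributes, and ids are illustrative assumptions, not the shipped PDOM markup.

```html
<!-- Illustrative sketch only; the shipped PDOM may differ.
     The H2/H3 hierarchy conveys that the Balloon, Sweater, and Wall
     are sibling objects contained within the Play Area. -->
<section aria-labelledby="play-area-heading">
  <h2 id="play-area-heading">Play Area</h2>

  <h3>Yellow Balloon</h3>
  <p>A yellow balloon, currently in the middle of the Play Area.</p>

  <h3>Sweater</h3>
  <p>A sweater on the left side, with pairs of positive and negative charges.</p>

  <h3>Wall</h3>
  <p>A wall on the right side, with pairs of positive and negative charges.</p>
</section>
```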
3.2 Keyboard Navigation and Operation
The PDOM described in the previous section provides meaning through heading hierarchy. It also provides a mechanism for efficient keyboard navigation and operation via navigable elements such as landmarks and regions. With screen reader commands, users can efficiently navigate by landmarks, regions, or headings. In Balloons and Static Electricity, the Scene Summary, Play Area, and Control Panel were coded as navigable regions (HTML section element) that each start with an H2 heading. With this structure, a screen reader user can navigate to the Play Area either with the region command or via the heading, thus providing efficient navigation from anywhere in the sim. We employed native HTML form controls and standard interaction design patterns [7], in order to create interactive sim objects that were findable and operable by users. For example, all interactive sim buttons are real HTML buttons and recognized by screen readers as such. They are reachable via the keyboard with the Tab key and can be operated upon (activated) by pressing either the Spacebar or the Enter key.
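A hypothetical skeleton of this region-and-heading structure, with one native button, might look like the following; the region labels come from the sim, while the markup details are assumptions for illustration.

```html
<!-- A section element with an accessible name is exposed to screen
     readers as a navigable region; each region starts with an H2. -->
<section aria-labelledby="summary-heading">
  <h2 id="summary-heading">Scene Summary</h2>
  <p>…</p>
</section>

<section aria-labelledby="play-area-heading">
  <h2 id="play-area-heading">Play Area</h2>
  <!-- Balloon, Sweater, and Wall (see the previous sketch) -->
</section>

<section aria-labelledby="control-panel-heading">
  <h2 id="control-panel-heading">Control Panel</h2>
  <!-- A native button is reachable with the Tab key and activated
       with Spacebar or Enter, with no extra scripting required. -->
  <button>Remove Wall</button>
</section>
```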
3.3 Timely Description that Connects Interactions with Dynamic Content
Descriptions of changing information such as Balloon charge, Balloon position and Balloon behavior (direction and velocity during attraction and repulsion) must be delivered in a timely fashion while minimizing disruption. Our approach involved announcing dynamically changing charge information using ARIA live regions [8]. Live regions provide a way for screen readers to present new information that occurs away from where the user is currently reading (or has focus). For example, when a user rubs the Balloon on the Sweater, a transfer of charge occurs causing changes in the descriptive content associated with both the Balloon and the Sweater. Through the use of live regions the user is made aware of changes in charge levels for both objects, even though the user technically is only “reading” the Balloon. In designing our descriptive text strings, we aimed for brevity, consistency and clarity [9].
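A minimal sketch of this live-region pattern is shown below; the element id, function name, and description string are illustrative, not the shipped implementation.

```html
<!-- A polite live region: new text is announced when the screen
     reader is next idle, even though focus stays on the Balloon. -->
<p id="charge-status" aria-live="polite"></p>

<script>
  // Called after a rub transfers charge between Sweater and Balloon.
  function announceChargeChange(description) {
    document.getElementById('charge-status').textContent = description;
  }
  // e.g. announceChargeChange(
  //   'Balloon has a few more negative charges than positive charges.');
</script>
```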
3.4 Keyboard Interaction and Engagement with Balloon
In order to explore the sim with a screen reader, the learner needs to be able to grab the Balloon, drag it to different locations, rub it on the Sweater (or Wall), and release it (to see how it attracts and repels) using keyboard interactions. To achieve this, we created the following mechanisms (a combined code sketch follows the list):
1. Grab, Drag & Rub Interaction. These interactions are integrated into one (similar to the mouse-driven grab, drag, and rub interaction). To grab, drag, and rub the Balloon, the user navigates keyboard focus to the Balloon and then presses one of a set of four directional movement keys, the W, A, S, or D keys. These keys correspond to up, left, down, and right movements of the Balloon, respectively. These keys were selected as they are commonly used for directional movement in the computer gaming community. The sim includes a description of the interaction so it can be used (learned) without prior gaming experience. Note that our initial design utilized the Arrow keys for directional movement, but the Arrow keys already have assigned meaning (as cursor keys) essential to screen reader control.
2. Release Mechanism. We provided three ways to release the Balloon: Spacebar, Control + Enter, and Tab. The Spacebar was chosen for its alignment with the established interaction of submitting a form. Pressing Control + Enter is another standard way to submit a form, so for consistency this key combination was also implemented as a release. Pressing the Tab key moves focus away from the Balloon, and as a result (whether intentional or unintentional) it must release the Balloon so that the non-visual interactions (and representations) remain in sync with the visual representations.
3. Balloon Interaction Keyboard Shortcuts. We implemented the Shift key as a semi-modal acceleration key so that the user can make the Balloon move in larger increments (in the chosen direction). We also designed four letter-based (non-case-sensitive) hotkey combinations to jump the Balloon to pedagogically strategic locations in the Play Area: JW (to Wall), JS (to edge of Sweater), JN (to Near Wall), and JM (to Middle of Play Area). By using pairs of keys for the hotkeys, we avoided conflicts with browser hotkey functionality [9]. To address other potential conflicts with screen reader functionality, we used the ARIA role application on the Balloon. The application role informs the screen reader to pass key presses to the web application (the sim) for an alternate purpose. Note that this approach does not work for the Arrow keys, but does work for letter keys – many of which are used as hotkeys for screen reader navigation.
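The sketch below combines the three mechanisms above into a single hypothetical keydown handler; the helper functions, step sizes, and element id are assumptions for illustration, not PhET's implementation.

```html
<div id="balloon" role="application" tabindex="0"
     aria-label="Yellow Balloon"></div>

<script>
  var STEP = 5;               // normal movement increment (illustrative)
  var BIG_STEP = 25;          // accelerated increment with Shift held
  var pendingJump = false;    // true after 'J' begins a jump hotkey pair

  // Hypothetical helpers standing in for the sim's model code.
  function moveBalloon(dx, dy) { /* update the Balloon's position */ }
  function jumpBalloonTo(place) { /* move to a named location */ }
  function releaseBalloon() { /* let the model take over movement */ }

  document.getElementById('balloon').addEventListener('keydown', function (e) {
    var key = e.key.toLowerCase();

    if (key === 'j') {                     // first key of JW/JS/JN/JM
      pendingJump = true;
      return;
    }
    if (pendingJump) {                     // second key completes the jump
      if (key === 'w') jumpBalloonTo('wall');
      else if (key === 's') jumpBalloonTo('sweater edge');
      else if (key === 'n') jumpBalloonTo('near wall');
      else if (key === 'm') jumpBalloonTo('middle of play area');
      pendingJump = false;
      return;
    }

    var step = e.shiftKey ? BIG_STEP : STEP;      // Shift accelerates
    if (key === 'w') moveBalloon(0, -step);       // up
    else if (key === 'a') moveBalloon(-step, 0);  // left
    else if (key === 's') moveBalloon(0, step);   // down
    else if (key === 'd') moveBalloon(step, 0);   // right
    else if (key === ' ' || (e.ctrlKey && key === 'enter')) {
      releaseBalloon();                    // Spacebar or Control + Enter
    }
    // Tab is deliberately not intercepted: moving focus away also
    // releases the Balloon, keeping non-visual and visual state in sync.
  });
</script>
```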
4 Iterative Usability Evaluation
In order to test and refine our designs, we conducted a series of interviews with blind users. We asked users to explore the sim and, while interacting, to “think aloud” [10, 11]. Between interviews we made modifications to the software in response to user experiences.
4.1 Methods
Participants.
We recruited 12 screen reader users to participate in interviews, and conducted 11 in-person interviews and 1 remote interview. The users, 5 women and 7 men, spanned a diverse age range (19 to 61 years) and demonstrated diverse levels of expertise with their screen readers; one user used both a refreshable Braille display and a screen reader. All users had at least some post-secondary education, the youngest being in their first year of college.
Apparatus.
We gave the 12 users the option to use their own computer or one that we provided. Overall, the hardware and software setups were varied:
- Hardware: desktop PCs (2), Mac Air (2), Surface Pro 3 tablet (1), Lenovo E541 (1), Lenovo T520 (6)
- Browsers & screen readers: Chrome & JAWS 17 Professional (1), IE11 & JAWS 17 Home (1), Firefox & JAWS 17 Home (1), Firefox & JAWS 17 demo (5), Firefox & JAWS 15 (1), Safari & VoiceOver (2), and Firefox & NVDA 2015 (1)
Each interview was video recorded, with the camera positioned to capture the participant’s screen and keyboard.
Procedure.
Most interviews took approximately 1 h. Each interview proceeded as follows:
1. Describe the process and outline the order and components of the interview.
2. Ask background questions regarding interests in science, demographics, educational background, system specifications, daily computer use habits, use of AT, level of expertise with AT, and online education.
3. Explain the state of the prototype that we would be using. For example, it was necessary to explain that the prototype was fully keyboard accessible, but that some parts of the verbal descriptions were not yet implemented. Thus, information about the sim would be coming from both the screen reader and the interviewer, who would be playing the part of the screen reader for yet-to-be-implemented descriptions. This aspect was inspired by the Wizard of Oz method [12]. While delivering the live description, the “wizard” followed a planned description script (where possible), but improvisation was required at times.
4. Describe the Think Aloud Protocol (TAP) and conduct a TAP warm-up exercise.
5. Introduce use of the sim as a learning activity by asking users to imagine they were in a middle school science class starting a unit on static electricity and that their teacher had given them this sim to explore.
6. Provide access to the sim prototype via a link or a downloaded file.
7. Ask the user to freely explore for 20–40 min and to think aloud while doing so. As the user explores, the interviewer/wizard provides live descriptions of unimplemented dynamic content and occasionally reminds the user to think aloud.
8. Ask follow-up questions to gain an understanding of the user's perspective and thoughts on the experience, and suggestions for how to improve the design.
5 Discussion & Results
Analysis of user interviews provided significant insight into effective (and ineffective) design approaches for making the interactive sim, Balloons and Static Electricity, accessible to visually impaired users. We describe here, for each inclusive design feature described in Sect. 3, what worked well and what challenges were found.
5.1 Access to Sim Content and Interactions
We found the Parallel DOM (PDOM) to be an effective approach for providing access to sim content and interactive (i.e., controllable) sim objects.
What Worked Well.
- Some sim content is static, meaning it does not change or changes very little, and some content is dynamic and changes a lot (see Sect. 5.3). Users were able to easily access, review, and locate all static content and some dynamic content in the sim. All users successfully accessed content line by line with the Arrow keys, and most used a combination of strategies in addition to the Arrow keys. Access to this content is a significant achievement that allowed most users to explore, ask their own questions, and experiment to answer their own questions.
- When first encountering the sim, most users employed a strategy that consisted of first listening and then interacting. This behavior of listening before interacting is consistent with prior research with blind users [13, 14]. Some users listened just to the brief Scene Summary that introduces the sim and then took the suggested navigation cue (Press Tab for first object) at the end of the Scene Summary. Others listened to everything in the sim at least once before interacting with the Tab key or activating one of the buttons in the Control Panel. Both strategies were effective.
Challenges.
- Some users encountered challenges that they were not easily able to overcome. For example, one user's navigational approach involved listening to descriptions (sometimes in full, other times minimally), then using the Tab key to navigate quickly around the sim, and then listening again – without seeming to set any specific goals for exploration. In this case, the descriptions did not seem to support the user in finding a productive path, and her navigation seemed aimless.
- Browser implementation inconsistencies led to some confusion about interactions. For example, with Safari, VoiceOver reads out the Balloon as, “Application 3 items. Yellow Balloon”. Upon hearing a number of items, one user tried, unsuccessfully, to interact with the Balloon as if it were a list.
- We learned how to optimize label and description text as we came to understand more fully how the interactive objects were read out by screen readers. Label text is the essential information for a control and is always read. Description (or help) text is additional information that can help the user understand what to do with the interactive object; it may or may not be read out by default along with the label text. Changes to label and description text were made throughout the project, and these changes improved the auditory experience in two ways: reduced screen reader verbosity and improved clarity. We found it useful to optimize label text to reduce the need for help text (a minimal example of this split follows the list).
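As a hypothetical example of the label/description split referenced above: the label (“Remove Wall”) is always read, while the longer description text may or may not be read by default, depending on screen reader verbosity settings. The help wording here is illustrative.

```html
<!-- Label text: always read. Description (help) text: optional,
     associated with the control via aria-describedby. -->
<button aria-describedby="remove-wall-help">Remove Wall</button>
<p id="remove-wall-help">
  Removes the wall from the right side of the Play Area.
</p>
```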
5.2 Keyboard Navigation and Operation
In this category, we also found the PDOM approach to provide affordances that supported effective keyboard navigation and operation.
What Worked Well.
- The PDOM approach allowed users to employ strategies developed from past experience to explore and interact with the sim. With the content structured and accessible in familiar ways, users had full agency to independently solve problems that arose – including science learning and technical challenges. For example, one user utilized the Tab key to navigate through the sim twice, while listening minimally to descriptions. Without the descriptions, she did not have enough information to successfully explore, and eventually changed her strategy: she used a screen reader command to bring up a list of all headings, chose to navigate to the Scene Summary, and began listening, ultimately proceeding along a more productive path. In an example of a strategy change in response to a technical issue, one user found that using Tab or Shift + Tab did not appropriately navigate away from the Reset All button. In one case the user made use of the Arrow keys to navigate away from the button, while in another case they used a screen reader navigation command (the B key in the JAWS screen reader) to navigate to the next button.
- In general, users found navigation and operation of common controls (e.g., buttons and dialog boxes) to be straightforward. If the label text was clear and read out correctly by the screen reader, users seemed to know how to interact based on prior web experience.
Challenges.
- Navigation cues (telling the user explicitly what to do) were sometimes helpful, but significantly increased screen reader verbosity. Some users missed navigation cues by not listening long enough. Providing cues on demand may be a better approach.
- Some navigational cues were poorly placed, which led to unsuccessful interaction attempts. We found that a navigational cue needs to be actionable at precisely the moment it is delivered.
5.3 Timely Description that Connects Interactions with Dynamic Content
We found that connecting interactions with changing content helped to create a successful interaction flow.
What Worked Well.
- All users understood that something changed when they rubbed the Balloon on the Sweater. Most perceived and understood that the overall charge had changed from neutral to positive (Sweater) or negative (Balloon). Only some noticed the charge description update: “a few more”, “several more”, and “many more”. We chose this three-point relative scale to convey charge levels because it is the relative amount of charge, not the total number of each charge type, that is foundational to the underlying concepts (a sketch of such a mapping follows this list). One user commented that a relative scale was useful, but another participant commented that the difference between “several more” and “many more” was too subtle. At least two users said that a numerical value for the level of charge would be more useful.
- Live description, though difficult to execute, worked well to test out a complex description plan for comprehensibility, usability, and effectiveness before implementation. As part of the live description, some sound effects were produced by rubbing an actual balloon to indicate Balloon on Sweater and hitting the balloon to indicate reaching the Wall. These sounds received positive reactions from some users. However, live sounds were difficult to execute and were not presented consistently. Further research will explore the use of sounds to augment verbal descriptions.
- Announcing changes to the Play Area when a user activated a button (e.g., “Wall removed from Play Area”) was clearer to users than listening only to the changed button text.
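One concrete sketch of the three-point relative scale mentioned in the first bullet above: a mapping from a model quantity to description strings. The thresholds and function name are illustrative assumptions, not the sim's actual values.

```html
<script>
  // Map a net charge count onto the relative scale used in descriptions.
  // Threshold values are hypothetical; the sim's cutoffs may differ.
  function relativeChargeDescription(netNegativeCharges) {
    if (netNegativeCharges === 0)
      return 'no more negative charges than positive charges';
    if (netNegativeCharges <= 15)
      return 'a few more negative charges than positive charges';
    if (netNegativeCharges <= 40)
      return 'several more negative charges than positive charges';
    return 'many more negative charges than positive charges';
  }
</script>
```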
Challenges.
- We found certain descriptions were particularly challenging for some users, and need to be refined. The description including “no more negative charges than positive ones” was interpreted by one user as no charge at all, rather than a net zero, or neutral, charge. Not describing positive charges caused some users to think that the balloons had no positive charges at all, rather than achieving the intended goal of cueing users that the negative charges were more relevant than the positive charges for exploration. The description of induced charge in the Wall was misunderstood by a few users; they thought that the Wall was actually repelling the Balloon when they heard “[…] negative charges in the wall are repelling away” from the negative charges in the Balloon.
- There were some implementation issues that need to be addressed. For example, one user was confused when they came across the Wall via the Arrow keys directly after intentionally removing it. Details in the Scene Summary sometimes led to confusion because they were not implemented to update dynamically; if users re-read the brief Scene Summary after interacting, the information was no longer aligned with the current sim state.
- Some descriptions needed to be more succinct, and new or changed information needed to come first. Details about charges were missed if a user did not listen to the full update. One user said, “There is a lot of talking going on. I have to be honest, I tend to tune it out.” This user repeatedly stopped dynamic updates prematurely, and as a result sometimes missed important details.
- We found capturing certain object behavior in strings of text to be particularly challenging. For example, a Balloon with a small net negative charge will attract to the Sweater slowly at first, speeding up as it gets closer. This behavior involves continuous change over distance and time, while text is better suited to describing change occurring in discrete units.
5.4 Keyboard Interaction and Engagement with Balloon
The Balloon object presented an interesting interaction design challenge. Ultimately, we want users to easily understand how to grab, drag, rub and release it with as little explanation as possible. The challenge is that there is no single HTML form control (or ARIA role) that provides a way to increment and decrement two separate values (Balloon position x and y) by simply operating the Arrow keys. In other words, it is difficult to represent the Balloon in code in a way that users will intuitively understand how to interact with it. We tried different types of HTML input controls, all in combination with the ARIA role application, to achieve the required keyboard interactions. All users were eventually successful in grabbing, dragging, rubbing, and releasing the balloon – though several needed guidance from the interviewer. An analysis of how to optimize implementation of the Balloon and how to best describe the interactions is ongoing.
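For contrast, a hypothetical single-value control illustrates why no native element fits: a native slider increments and decrements one value with the Arrow keys, while the Balloon needs two values (x and y) moved together, and the Arrow keys are in any case reserved for screen reader cursor control.

```html
<!-- A native range input exposes a "slider" role and handles only a
     single value; the Balloon's 2D position has no such native analog. -->
<input type="range" min="0" max="100" value="50"
       aria-label="Balloon horizontal position">
```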
What Worked Well.
- We found the directional movement keys (W, A, S, and D) to be an understandable alternative to the Arrow keys. Three users needed no additional explanation; some users were curious about our choice of these keys, but nevertheless easily used them. One user exclaimed, “Oh, they are just like the Arrow keys!”, commenting on the layout of the keys on the keyboard. Only the first user to try these keys had significant trouble mastering their use; a revision to the description of the interaction seemed to improve understanding for subsequent users.
- The Spacebar and Tab key, as release mechanisms, were quickly learned and used repeatedly by all users. Other than some surprise with the Tab key, e.g., “I keep forgetting that when I tab away, I release the balloon,” the interaction was understandable. There were no issues with the Spacebar. One user mentioned that the Spacebar is used in some computer games to pick up and drop objects, confirming our choice of the Spacebar as a useful release mechanism for the Balloon.
- The jump hotkey combinations (e.g., JS, JW) appear to be quite understandable and memorable. One user commented, “It's like using J like a Shift key. Those commands make sense.” This user did not actually employ the hotkeys; regardless, during the wrap-up questions, they were able to correctly recall three of the four hotkeys. Another user made extensive use of the jump hotkeys.
- The Balloon acceleration operation (Shift key plus a direction key) showed promise as a useful way to move the Balloon more efficiently; however, its initial effect was found to be negligible. We have since increased the amount of acceleration the Shift key provides.
Challenges.
- The pronunciation of “W, A, S, and D keys” in the interaction cue was not clear at a high screen reader speed; “D” sounded like “T”.
- Instructions for the jump hotkeys and the accelerator key were not easy to find, as they were only available at the bottom of the Keyboard Commands help dialog. Moving the information to the top of the dialog will likely improve discoverability.
- Screen readers announce aspects of the Balloon that are not directly meaningful to users, for example, “Application. Yellow Balloon. Three items”, or “Application. Yellow Balloon. Draggable. Read-only.” Some users were more tolerant of this verbosity than others. Decreasing this verbosity by improving the Balloon's representation in code is currently in progress.
6 Conclusions
We faced a number of challenges in the design of a screen reader accessible interactive science simulation, Balloons and Static Electricity. The main challenges, identified through a study with 12 blind users, related to the delivery of complex descriptions in dynamic situations and the lack of a native role for the main interactive sim object, the Balloon. In spite of the challenges reported here, all users were excited about the research and their participation in it.
The web standards (HTML and WAI-ARIA) that pertain to making highly interactive web applications accessible are complex and evolving. These standards are implemented inconsistently by browsers and screen readers, complicating our implementation approaches. Cases where native elements and roles could be directly applied to interactive sim objects seemed to be the easiest for users to discover and utilize.
The outcome of our efforts, thus far, is an interactive simulation prototype that is entirely operable by keyboard. Visual users who use alternative input devices such as a switch or joystick to browse the web can now access, operate, and learn with our prototype. Many sim features are now technically and functionally accessible for visually impaired users. Future work will focus on sonification (the use of non-speech sound to convey information), complementing ongoing work on a more complete description strategy.
References
Rutten, N., van Joolingen, W.R., van der Veen, J.T.: The learning effects of computer simulations in science education. Comput. Educ. 58, 136–153 (2012)
D’Angelo, C., Rutstein, D., Harrison, S., Bernard, R., Borokhovski, E., Haertel, G.: Simulations for STEM Learning: Systematic Review and Meta-Analysis. Technical report, SRI International (2014)
PhET Interactive Simulations. http://phet.colorado.edu/
Balloons and Static Electricity – PhET Prototype Simulation. http://www.colorado.edu/physics/phet/dev/html/balloons-and-static-electricity/1.2.0-accessible-instance.11/balloons-and-static-electricity_en.html?accessibility
PhET Interactive Simulations: Accessibility. http://phet.colorado.edu/en/about/accessibility
Podolefsky, N.S., Moore, E.B., Perkins, K.K.: Implicit scaffolding in interactive simulations: design strategies to support multiple educational goals. arXiv preprint arXiv:1306.6544 [physics.ed-ph] (2013)
Scheuhammer, J., Cooper, M., Pappas, L., Schwerdtfeger, R.: WAI-ARIA 1.0 Authoring Practices, March 2013. http://www.w3.org/TR/wai-aria-practices/
Craig, J., Cooper, M.: Accessible Rich Internet Applications (WAI-ARIA) 1.0, March 2014. https://www.w3.org/TR/wai-aria/
Keane, K., Laverent, C.: Interactive Scientific Graphics: Recommended Practices for Verbal Description. Technical report, Wolfram Research Inc., Champaign, IL (2014). http://dgramcenter.org/accessible-dynamic-scientific-graphics.html
Maximova, S.: J, K, or How to Choose Keyboard Shortcuts for Web Applications, November 2013. https://medium.com/@sashika/j-k-or-how-to-choose-keyboard-shortcuts-for-web-applications-a7c3b7b408ee#1.mrzwq3n1q
Chandrashekar, S., Stockman, T., Fels, D., Benedyk, R.: Using think aloud protocol with blind users: a case for inclusive usability evaluation methods. In: Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility, pp. 251–252. ACM Press, Portland, Oregon (2006). http://portal.acm.org/citation.cfm?doid=1168987.1169040
Lewis, C., Rieman, J.: Task-Centered User Interface Design: A Practical Introduction. Boulder, Colorado (1993). http://hcibib.org/tcuid/
Green, P.: The Wizard of Oz: A Tool for Rapid Development of User Interfaces. Final report (1985)
Fakrudeen, M., Ali, M., Yousef, S., Hussein, A.H.: Analysing the mental model of blind users in mobile touch screen devices for usability. In: Proceedings of the World Congress on Engineering, vol. II, WCE 2013, London, U.K., July 2013
Kurniawan, S.H., Sutcliffe, A.G., Blenkhorn, P.L., Shin, J.E.: Investigating the usability of a screen reader and mental models of blind users in the Windows environment. Int. J. Rehabil. Res. 26, 145–147 (2003)
Acknowledgements
We would like to thank Jesse Greenberg (PhET software developer) for his significant implementation efforts and design insights. We would also like to thank Shannon Fraser and Sambhavi Chandrashekar for support during the interviews. Equipment and space for interviews was provided by DELTS Media Services (thanks to Darcy Andrews and Mark Shallow) at Memorial University and by the Inclusive Design Research Centre (thanks to Vera Roberts and Bert Shire). Funding for this work was provided by the National Science Foundation (DRL # 1503439), the University of Colorado Boulder, and the William and Flora Hewlett Foundation. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.