Abstract
The DEED (design-based evidence collection and evidence-based design thinking) model offers a structure in which designers and scientists can effectively support one another in the development of both design and knowledge. The model offers one possible implementation of the applied and basic combined research strategy [1]. DEED offers a design strategy that:

(1) immediately supports design;
(2) in the short term, supports organizational and collective improvements;
(3) in the long term, adds to general knowledge to support society as a whole;
(4) ensures that researchers do not interrupt the design process; and
(5) scales well for small and large organizations.
This paper introduces the DEED model and its stages, and explores the distinction between design thinking and the design thinking process. The DEED model is an example of the latter: a strategy to gain deep knowledge by building on contemporary design strategies. The DEED model anticipates potential points of concern between designers and scientists working in collaboration, and offers a structure to support risk-taking and innovation in a manner that may not be typical of a design process with researcher involvement. DEED offers a robust strategy to incrementally increase general knowledge, and to pointedly improve design.
Keywords
- Design thinking
- Collaboration
- Work-flow
- Innovation
- Qualitative and quantitative research
- Applied and basic research combined
1 Introduction
What may once have been a useful distinction between science, the humanities, and design now marks a shallow boundary between blending pursuits. Science has served to understand the natural world [2], but “natural” performance now also includes computer-mediated behaviour. In our lab, we collect video game data in order to better understand human learning [3,4,5], and although the end goals of our research differ from those of game designers and humanities researchers, these goals are achieved through similar means. Understanding the domain, the user, and the context is critical to all three pursuits. These overlapping pursuits create opportunities for all parties to achieve understanding together beyond what each field could do on its own.
Building on this, we present the DEED (Design-based evidence collection and evidence-based design thinking) process model to implement the “Applied and Basic Combined” approach [1]. The DEED model is an extension of a basic agile design strategy, which retains critical design stages to support rapid prototyping and development. The novel contribution of DEED as a design thinking process model is the built-in research stages. Findings from the research stages inform the product/service under development, and can be shared among the organization to support progress across multiple, related projects. Additionally, the basic findings that come out of research can be used to learn about the natural world generally, over a longer time scale.
The model maintains separate stages to support independent thought, and connects those stages to support systematic collaboration. Sensitive to the concerns of some designers, the DEED model constrains research to a subset of the process and ensures that creatives and developers have sufficient space to innovate. Anyone can use the DEED model, from a single entrepreneur to a large, established organization with multiple specialists or departments for each stage.

While basic science usually strives to understand fundamental principles, applied science more typically focuses on practical solutions to problems. On the journeys to their respective goals, though, applied and basic science often share methodology, resources, skills and sub-goals, such as collecting data or understanding the ways that different variables play a role in outcomes. Combining basic and applied science [1] is critical for the research stages in the DEED model, where the data from the design process can be leveraged to make general inferences, but can also be used to directly inform the product or service under development. The proposed model explicitly includes research as part of the design process, and frames design as integral to the research process. This is evidence-based design thinking: it values research to ensure that the design works.

Larson describes the design process itself as experimental (2005), and building on this sentiment, the DEED model encourages design and research teams to use both qualitative and quantitative methods in the Collaborative Gear (see Fig. 1). The Collaborative Gear (modified from [6]) contains specific stages dedicated to research, which make it easier for the design team to inform researchers about concerns or risks taken during the content creation stage, which researchers can then explore. With this emphasis on teamwork, the DEED model affords a safe way to take design and innovation risks. This DEED framework encourages designers to actively participate in the research process, and researchers to understand more completely the design decisions upon which their test object is built. Similar stages already exist in most rapid prototyping design models: paper sketches and low-fidelity prototypes are the basis for early qualitative research, while high-fidelity prototypes are the basis for later quantitative research. DEED formalizes the role of both qualitative and quantitative research as valuable steps throughout the design process (Table 1).
Design will benefit from this arrangement in the long term as researchers learn more about behaviour outside of traditional laboratory tasks and apply it back to future design iterations. Using basic research to eliminate inviable approaches saves time and effort that would otherwise be spent on methods which produce less accurate or even misleading results. To do this, basic research needs data which can only be provided with the help of the design team: this is design-based evidence collection. By constraining the researcher’s contribution to specific stages in an adapted agile design strategy, research is unlikely to impede innovation, and the researcher is more likely to maximize intellectual gains through involvement in the design process. Including research as part of the rapid prototype stage offers an opportunity for designers and researchers to collaborate and communicate over each iteration to foster more effective collaboration strategies [7]. If designers are sensitive to the goals of more basic research, they can support information gathering in natural environments that are specifically designed to allow users to achieve their goals unimpeded. In doing so, researchers gain insight into the natural behaviour of people to better understand cognition and decision making.
The DEED process supports both knowledge discovery and effective design through:

(a) market research;
(b) content creation;
(c) repetitions of the Collaborative Gear; and
(d) the final prototype.
Organizationally, a project manager can set a threshold for key performance indicators (e.g. enjoyability, efficacy measures). These are measured in the quantitative research stage, where, should the results of user testing suggest the key performance threshold is met, the cycle advances for the final time. In each round, the researcher’s responsibility is to ensure that the data required to perform their research is available. The test is both a test of the product and of the process. The design structure can be made leaner by constraining the maximum number of cycles and participants [8], or it can be made more research-intensive by setting more sensitive advancement thresholds.
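To make the advancement logic concrete, the following sketch expresses the threshold-gated cycle in Python. It is a minimal illustration, not part of the model itself: the KPI names, threshold values, cycle cap, and helper functions (run_user_test, revise) are all hypothetical placeholders to be supplied by the research and design teams.

```python
# A minimal sketch of threshold-gated iteration through the Collaborative
# Gear. The KPI names, thresholds, and helpers are hypothetical placeholders.

KPI_THRESHOLDS = {"enjoyability": 4.0,        # e.g. mean rating out of 5
                  "task_success_rate": 0.85}  # proportion of users succeeding
MAX_CYCLES = 5  # cap cycles to keep the process lean [8]

def run_user_test(prototype):
    """Stand-in for the quantitative research stage; returns measured KPIs."""
    raise NotImplementedError  # supplied by the research team

def revise(prototype, results):
    """Stand-in for sketch -> qualitative research -> wireframe -> visual
    design -> prototype, informed by the previous round's results."""
    raise NotImplementedError  # supplied by the design team

def collaborative_gear(prototype):
    for cycle in range(1, MAX_CYCLES + 1):
        results = run_user_test(prototype)
        if all(results[kpi] >= cutoff for kpi, cutoff in KPI_THRESHOLDS.items()):
            print(f"Cycle {cycle}: thresholds met; advance to final prototype.")
            return prototype
        prototype = revise(prototype, results)
    print("Cycle cap reached; the team decides to ship, re-scope, or abandon.")
    return prototype
```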
The DEED model includes nine distinct stages (see Fig. 1): market research, content creation, sketching, qualitative research, wireframing, visual design, prototyping, quantitative research and the final prototype. Market research and content development precede the Collaborative Gear, which contains everything from sketching to quantitative research, all of which can be repeated as needed to reach a threshold established by the design team or a product manager. The threshold can be defined by key performance indicators suggested by the design problem. Indeed, the design process should only enter the Collaborative Gear if it is discovered that there is a real problem for which a solution can be attained. If meeting that goal can be quantified or measured qualitatively, that measure is an ideal criterion for allowing progression to the final prototype. The stages of the model are explored in more detail in Sect. 5.
2 Motivation
“Contemporary research teams get a further boost from fresh ways of using the Web, social media, and visual communications tools that amplify collaborations [1].”
- B. Shneiderman
The DEED design process model, if applied widely, offers benefits spanning immediate product improvements and the general, societal-level advantages of a well-informed populace. Our initial motivation, as cognitive psychology researchers, was an academic one: to better understand human cognition. Our experience programming experiments to be as user-friendly as possible, and collecting video game data to study real-world motivated behaviour, made it clear that human-computer interaction problems are similar to human cognition problems. Both domains share methods, equipment, and populations of interest; both domains have a lot to offer each other.
2.1 Practical Considerations
Having someone with knowledge of the scientific method as an integrated part of the design team is advantageous. A team member who knows how to strategically manipulate critical components of interest over rapid iterations can help support causal claims about them. Simultaneously implementing all of the test group’s user feedback, and/or corrections for observed usability problems, is poor practice. Changing everything at once does not inform the scientist or the designer as to the actual source of the user frustration or the usability error. Changing one feature at a time and observing its effect is much more informative. The scientist may generalize the findings to support or refute a scientific theory. The designer will know what to do in future work.
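As a sketch of this one-change-at-a-time discipline, consider the following; the feature names, the frustration metric, and the helper functions are invented for illustration.

```python
# One-factor-at-a-time iteration: hypothetical feature names and metric.

def measure_frustration(design):
    """Stand-in for a user test; returns a frustration score (lower is better)."""
    return 0.0  # placeholder value; replace with real user testing data

def apply_change(design, change):
    """Return a copy of the design with a single candidate change applied."""
    variant = dict(design)
    variant[change] = True
    return variant

current_design = {"move_search_box": False,
                  "enlarge_buttons": False,
                  "recolor_links": False}

baseline = measure_frustration(current_design)
for change in current_design:
    variant = apply_change(current_design, change)
    effect = baseline - measure_frustration(variant)
    print(f"{change}: effect {effect:+.2f}")
# Applying all three changes at once would yield a single number that
# neither the scientist nor the designer could attribute to any one change.
```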
Further, making step-by-step iterations avoids problems of “throwing the baby out with the bath water”, in that there might be a really useful feature that users are not interacting with because of some co-occurring issue. If the feature is excluded based on observation of a group of people who ignored it for a myriad of potential reasons, the interface may not be as strong as it could be. For example, a search feature on a website might not be easy to find in the first design iteration (a placement flaw, or an aesthetic issue, perhaps). Removing it entirely would be ill-advised when, instead, placing it more effectively or tweaking its appearance could make it accessible.
Extreme caution is advised, though, in overriding the intuitions of artists to align with potentially relevant scientific claims. Throughout history, artists and poets have communicated real psychological phenomena through their art work, sometimes making observations about human perception which would not be studied scientifically until long after their discovery in artistic mediums. Brunelleschi knew how to invoke depth perception in a two-dimensional painting in 1415 [9]. Oculomotor vergence, the underlying mechanism enabling this phenomenon, was not even a topic of conversation in the scientific community for another four and a half centuries [10]. With the rapidity of both artistic and scientific advances, and the openness of communication the information age affords, this latency between artistic insight and scientific principle is expected to shrink moving forward [11]. With this in mind, artists may shed light on interesting behavioural phenomena exhibited by their audiences before scientists are even looking in that direction. Squashing artistic pursuits might harm scientific advancement, as odd or interesting phenomena may be entirely overlooked if the artist does not have room to explore their own domain.
To this point, it should be clear that designers and researchers are well positioned to help each other. A solution to the potential tension between researchers and designers on a team is to make the role of both explicit: researchers and designers share the same ultimate goal in better understanding the user’s experience. The focus within that goal differs, and that difference is a productive one.
2.2 Theoretical Background
Applying concepts from psychological science to industry problems seems relatively straightforward at first pass: learn something about people, check that it happens consistently, and communicate it to industry to fold into design strategies or into products that consider human psychology. It’s probably never that clean. Learning about human behaviour in the lab does not often allow for confident predictions about analogous behaviour in the wild. Learning about human behaviour in the lab does not even offer much confidence about predicting human behaviour in the lab. A large-scale replication effort by the 270 contributors of the Open Science Collaboration repeated 100 psychological studies. A meagre 36% of the replication efforts yielded significant p values in the direction originally reported [12] (see Note 1). If direct replications of experiments fail more often than not, it’s unsurprising that industry applications of psychological findings may fail to meet their mark. It’s important for scientists to make their data open so these verifications are made possible, even if — or especially when — working with design and industry partners.
The replicability of psychological findings aside, another concern with their application is that the scalability of the findings is tenuous. Even the Posner cuing task, a mainstay in attention labs, suffers when the cues are changed to be more realistic [13]. Central cues appear to draw reflexive attention when they’re arrows or eyes, regardless of how predictive they are: there is a reaction time advantage when a left-pointing arrow precedes a target on the left side of the screen, even when that arrow is helpful on only 50% of trials. This counters the assumption that cues will draw attention only when they’re actually predictive of the target location. Having eyes or arrows rather than boxes as a cue is hardly a big step in scaling a problem up, yet the paradigm appears to break down even with these minor manipulations.
Another issue with applying basic attention findings to interfaces is that real-world problems often have multiple modes of stimulation. Findings on visual attention do not necessarily scale cleanly to multimodal interfaces. Saccades are modified by congruent tactile stimulation [14], such that eye movements are made to a target more quickly when matched with a tactile stimulus in the same direction. Learning is supported by multi-modal integration, as evidenced by participants’ reduced timing errors when provided with auditory and visual feedback rather than visual feedback alone [27]. Applying attention findings to interface design isn’t always a multimodal problem, but most interfaces are improved with the integration of an auditory component, and much of what we know about visual attention is based upon findings isolating the visual modality.
From the design side, it’s challenging to test the efficacy of scientifically-principled features if the test groups assessing early versions of an interface or game are unaware of their own limitations. People are apt to overestimate their cognitive and perceptual abilities, believing they would notice items manipulated in a change blindness task even though observers reliably fail to do so [15]. Participants involved in alpha testing of an interface might be frustrated with informative components of the display and deem them unnecessary, even though these extra sources of information may minimize inattentional blindness, change blindness or other quirks of the visual system. Because of these metacognitive failures, an alpha testing group might tell the design team that a component of the design is unnecessary, and the responsive development team might opt to eliminate it during the next iteration. The same interface elements that are maligned by participants and subsequently excluded may actually help people avoid failures of the visual system - failures that most of us think we’re above committing. If feedback of this sort from test groups is implemented, it’s critical that a new test group is run through the newest design iteration before it goes live, lest these metacognitive failures lead to the exclusion of helpful features and, ultimately, to a catastrophic usability flaw.
Among metacognitive failures there are three “illusions of visual bandwidth” [16]: overestimations of breadth, countenance, and depth. The overestimate of breadth is the misguided assumption that viewers can simultaneously observe all of the details of a scene. The overestimate of countenance is the mistaken belief that observers will attend to more of the screen than they do - thinking that a person viewing a display will look at all of its elements, for example. Finally, the overestimate of depth refers to the assumption that attention to an object yields a detailed and robust encoding of it. Awareness of these common errors might support interface and game design, suggesting strategically redundant sources of information for particularly important events, or a parsimonious strategy for deciding what to include in an interface. Further, arming a design team with this list of illusions of visual bandwidth might help them weight the feedback provided by test groups, avoiding unprincipled or reactionary design decisions on later iterations of an interface.
3 Design-Based Evidence (DE) Collecting
Basic research based upon video game data is the best current example of design-based evidence collecting. This is the case in our own lab, where records of StarCraft 2 actions are used to inform us about cognition [3]. StarCraft 2 is a video game, and so the environment with which users interact is a designed one. What makes StarCraft 2 a good example case for design-based evidence collection is that the designers of the game record data from users that can be effectively leveraged for science. While the primary goal of the designers was probably not to support science, the incidental design decisions the StarCraft 2 team made opened up opportunities for research projects with the resulting game data: the design decisions supported evidence collection.
In game design, challenges are purposely introduced to make specific subsets of the experience harder; in StarCraft 2, for example, players cannot see the whole game environment at once. As researchers who are interested in studying information access patterns, we are able to use the player behaviour in response to that game design decision to tell us about how people access information in dynamic environments [4, 5]. If designers are sensitive to the goals of researchers, they can support information gathering in natural environments by recording valuable data from users interacting with their design.
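As a sketch of what such recording can enable, suppose a log of timestamped player actions is available. The field names below are invented for illustration and are not the actual StarCraft 2 replay format; the snippet summarizes one simple information-access pattern, the intervals between view changes.

```python
# Hypothetical telemetry analysis: the log format below is invented for
# illustration and is not the actual StarCraft 2 replay schema.
from statistics import mean

events = [
    {"t": 1.20, "action": "camera_move"},
    {"t": 1.45, "action": "select_unit"},
    {"t": 2.10, "action": "camera_move"},
    {"t": 3.80, "action": "camera_move"},
]

# Information-access pattern: intervals between successive view changes.
camera_times = [e["t"] for e in events if e["action"] == "camera_move"]
intervals = [b - a for a, b in zip(camera_times, camera_times[1:])]

print(f"{len(camera_times)} view changes; mean interval {mean(intervals):.2f} s")
```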
One important goal of human-computer interaction design is to reduce unnecessary friction, so the user may easily focus on their task. In basic laboratory tasks of human cognition, our goal is similar: we try to manipulate some quality of the environment so that the only difference between groups is the quality we changed. Should we find a difference between the groups, we can then attribute it to the manipulation and infer that the manipulation caused the difference. If, however, the task is unnecessarily complicated, simply interacting with the experiment requires additional cognitive processing; the evidence we collect is then influenced, in part, by the intuitiveness of the experiment interface, and some of the differences in participants’ behaviour are a function of the interface rather than the manipulation. This stymies the scientists’ understanding of cognition. All of this is to say that the human-computer interaction designs that successfully reduce the friction between the users and the computer get closer to the users’ genuine cognition, and provide a cleaner look at natural behaviour. Should these designers choose to record data from their users and share records of user behaviour with scientists, they would contribute invaluable data to understanding cognition. In doing so, everyone involved gains insight into the natural behaviours of people, and basic scientists are able to use recorded information to better understand cognition.
In addition to the quality of data arising from expertly designed interfaces, there is an issue simply of quantity. Most studies in psychology include about 25 people per experimental condition. Expertly designed websites, apps, games and other interfaces invite orders of magnitude more users than a typical psychology study. The sheer quantity of data, above and beyond its potentially superior quality, offers more room for research insights. User experience researchers conducting A/B or MVT (multivariate) tests to quantify the efficacy of design choices can also be testing a critical hypothesis about the foundations of human cognition without knowing it. History has shown that artists can have insights about the existence of phenomena well ahead of the scientists who aim to explain them. Sharing information between designers and researchers will help scientists keep up, and symbiotically move our understanding of human cognition and effective human-computer interaction forward.
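For instance, a routine A/B comparison of two design variants doubles as a small behavioural experiment. The sketch below runs a standard two-proportion z-test on invented conversion counts; the numbers are hypothetical and serve only to show the shared machinery.

```python
# Two-proportion z-test on hypothetical A/B conversion counts.
from math import sqrt, erf

successes_a, n_a = 120, 1000   # variant A: invented numbers
successes_b, n_b = 151, 1000   # variant B: invented numbers

p_a, p_b = successes_a / n_a, successes_b / n_b
p_pool = (successes_a + successes_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-tailed

print(f"z = {z:.2f}, p = {p_value:.4f}")
```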
4 Evidence-Based Design Thinking (ED)
One way to apply findings from basic science to improve interfaces is to use data collected from users of similar interfaces, have scientists build models of their behaviour, and use the resulting models to predict how people will perform in the interface of interest. Borji, Lennartz, and Pomplun recorded people playing video games, and created a series of models to predict their attention patterns and behaviours during driving game play [17]. The authors were able to predict where people would look depending on the state of the game and general properties of the play environment. Improvements to these types of models on the scientific end can save time and research energy on the industry end. Scientific models can also help inform designers where to put valuable information in a display, so as to improve the usability of the interface.
Some task interfaces have fostered masterful performance in their users. Perhaps the most consistent inter-generational one is the car. Dashboards, keys, and practically everything under the hood changes often, but the actual practice of driving, at some degree of abstraction, is not terribly different today than it was nearly a century ago. Novice drivers display different eye movement patterns than experienced drivers [18]. Novices look ahead of the car more often, and make fewer lateral eye movements. This is attributed to the propensity of expert drivers to scan their environment for potential hazards more often. Typifying eye movement patterns to discern the level of expertise of the user is a strategy that astute designers may consider using in developing their own interfaces. This would allow us to employ scientific principles in interface design.
Front-facing cameras on personal laptops are nearly good enough to act as coarse eye trackers, should a good algorithm become available for real-time data cleaning and for computing eye position relative to viewing angle and head distance. Assuming the inevitable capacity to use built-in hardware to measure eye position, and a corpus of sample eye movement patterns from people at different levels of skill with a software program, the interface could unfold features as the user displays behavioural and attentional patterns consistent with the next level of task mastery or the inferred goals of the user. Making such interface adjustments in service of a goal and in response to data is evidence-based design thinking.
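A minimal sketch of that idea, under stated assumptions: classify a user as novice or expert from a coarse gaze feature (lateral scanning, loosely following the novice/expert driving findings [18]) and gate interface features accordingly. The cutoff value and the sample data are invented for illustration.

```python
# Hypothetical expertise inference from a coarse gaze feature: experts are
# assumed to make more lateral scanning movements than novices (cf. [18]).

def lateral_saccade_rate(gaze_xs, duration_s):
    """Count direction reversals in horizontal gaze position per second."""
    reversals = sum(
        1 for a, b, c in zip(gaze_xs, gaze_xs[1:], gaze_xs[2:])
        if (b - a) * (c - b) < 0
    )
    return reversals / duration_s

EXPERT_CUTOFF = 0.8  # reversals per second; invented for illustration

def interface_level(gaze_xs, duration_s):
    """Unfold advanced features only for users who scan like experts."""
    rate = lateral_saccade_rate(gaze_xs, duration_s)
    return "advanced" if rate >= EXPERT_CUTOFF else "basic"

# Invented sample: horizontal gaze coordinates over 10 s of use.
sample = [0.1, 0.4, 0.2, 0.6, 0.3, 0.7, 0.5, 0.9, 0.4, 0.8]
print(interface_level(sample, duration_s=10.0))
```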
User centered design can be better informed by researchers focused on human behaviour, performance and experience. Cognitive science is just this: it includes people like social psychologists, behavioural economists, and marketing researchers, who are effective at performing early market research. Qualitative research is experiencing a resurgence in the social sciences, and is an important tool for getting the most information out of low-fidelity, sketch-based testing. It is a way to get a fuller sense of a real user and to rely less on contrived personas that may oversimplify the design problem space [19]. Cognitive scientists are poised to effectively test computer-mediated behaviour. Generations of cognitive scientists have done just that, though perhaps with less attention to the kind of generalizability that user experience testing demands. Designers who support cognitive researchers will benefit from this investment long term as researchers learn more about behaviour outside of traditional laboratory tasks.
5 The DEED Model Stages
The DEED (design-based evidence collection & evidence-based design thinking) model serves research and design alike. By opportunistically gathering data from well-designed interfaces, researchers can gain insight into human cognition. By opportunistically applying research findings, designers can make informed design decisions. In the DEED model, these specialists can more directly support each other’s work by sharing in the main goal of designing a good interface. In DEED, science and design serve each other as a natural consequence of the stages and their processes.
The DEED model encourages the researcher to act during specific stages in an adapted agile design strategy. This balances the value added by a researcher with the chance that the research impedes design innovation and brainstorming, which has been a concern for some designers. Additionally, having explicit stages dedicated to research makes it easier for all members of the design team to support or query the researcher(s) about concerns or unknowns. If a designer wishes to try something outlandish, the researcher can help to determine whether or not it works after the designer has had a chance to implement it. Having this safe organizational structure within which to take risks has the added benefit of supporting innovation. A designer who may have previously opted not to try something unconventional for fear that it would not work can more safely explore that idea in this DEED framework, since the research stages will catch design decisions that interrupt usability.
The stages of the DEED model feed forward more than backward, but a researcher might have suggestions for which user behaviours to record to make the best possible test case. For design-based evidence collection to work effectively, it’s important that the researcher is able to make requests for data, so long as those requests for data do not impede the design decisions. Rapid prototyping is supported by this model structure and including research as part of the rapid prototype stage offers an opportunity for designers and researchers to collaborate and communicate over quick iterations to foster more effective collaboration [7].
Generally, social psychologists, behavioural economists, and marketing researchers are more effective in early qualitative research stages as they are best suited to extract the most information out of low-fidelity, sketch-based testing paradigms. In later stages, cognitive scientists are poised to effectively test quantitative, computer mediated behaviour. At any research stage, fundamental concerns can be illuminated and the design process can revert one step to address them.
Market Research
What is it? Market research is the process of finding a need to meet. In the very simplest (and perhaps most powerful) cases, finding a need of one’s own that is not met by products and services acts as market research. Market research will inform the design team about what success looks like. The team can use information from this stage and their prior experience to establish the qualities of the minimally viable product before moving on through the DEED model.
Who does it? In smaller enterprises, market research is conducted by individuals who identify a need to be met. In larger enterprises, market research can be a more formal pursuit wherein social psychologists, economists, business and marketing practitioners, and other social scientists identify trends, gaps and qualities of demographics that a potential product or service could benefit.
Why do it? Human computer interaction is effectively informed by the human’s experience of a product or service. Market research introduces the human into the equation. Without market research, the audience is underspecified and any work conducted on a product might miss an important group of people that otherwise might benefit from the development process. The results of market research can suggest whether the idea should be pursued at all. If the need is minimal or insufficient, knowing that and deciding not to pursue the idea saves a lot of time and energy that would otherwise be spent on a great product for which no users exist. The better a potential user is understood, the greater the probability of success.
Content
What is it? Content is what your product or service is about. There are as many answers to “what is content?” as there are ideas for apps, games, and websites, but it includes the information that must be conveyed for users to get the main idea. For example, for a website the content would include the working title, the main pages, the ideas for headings, the general navigational structure, and the copy for a sample post. Some ideas about the information architecture should also be part of content generation.
Who does it? The content will traditionally be most closely associated with the person who motivated the project in the first place. In larger organizations, however, content can be generated by copywriters, developers and designers all working together.
Why do it? Most human-computer interaction involves transferring information. Without having content by which to start design iterations, the human-computer interaction design problem is fundamentally lacking. Content, in a lot of ways, is the why of good design while the rest of the DEED model describes the how of good design.
5.1 The Collaborative Gear
Sketch
What is it? A sketch can be as informal as a pen and paper drawing, outlining how the content from the previous stage would be presented to a user. Sketches are low-fidelity prototypes, on paper or in software, that represent the structure and the general presentation of content.
Who does it? A sketch can be done by anyone with access to pen and paper, and early sketches will probably arise as a natural consequence of content generation. It is important to make the sketch an explicit stage, though, because drawing out ideas may not come naturally to a particular person or team. While it’s nice to have someone with drawing talent perform a sketch, anyone who is able to approximate shapes can perform this step.
Why do it? Without a sketch, knowing how content is organized and presented can be very difficult. Sketches reduce sources of potential error between collaborators, because team members can see the design start to take shape and recognize assumptions they were making that hadn’t come up in conversation during content generation. Additional iterations of the design process result in additional sketches demonstrating possible design solutions in response to feedback from previous stages. Subsequent stages build upon improvements reflected in the sketch.
Qualitative Research
What is it? Qualitative research is a relatively unconstrained method in which data is collected from people. It includes things like open-ended questions or talk-aloud verbal protocol procedures.
Who does it? Qualitative methodology is enjoying a resurgence of interest in the social sciences. Many anthropologists, psychologists, historians, human resource specialists and people with similar training are effective qualitative researchers. In recent years, students in user experience design programs have been encouraged to learn and use qualitative research methods in their design practice.
Why do it? Qualitative research offers a good opportunity to “dive deep” into a user’s experience. Open-ended questions and talk-aloud protocols invite opportunities to discover unintended experiences that arise in response to design decisions. Placing a search box at the bottom of a website page might invite anger, for example, and it might be difficult to know to ask about that unless you ask a research participant to talk through their experience while navigating the low-fidelity prototype developed in the previous stage.
In later iterations, especially for larger firms with more resources, qualitative research can add value in knowing how different audiences might respond to an existing product. For example, scaling social media to communities with different cultural norms might require tweaks to the user experience that were not indicated by experience of the original audience. These types of opportunities for innovation arise in response to qualitative research like ethnographic studies, a new set of talk-aloud procedures and interviews.
Wireframe
What is it? A wireframe improves upon a sketch, or revisits design decisions that didn’t fare well in previous iterations. While a sketch is a low-fidelity prototype, a wireframe is a higher-fidelity step toward a working prototype and considers properties that are necessary for implementation. For the website example, the wireframe invites the use of a grid to plan where images and content will be placed. Wireframes are typically digitized, and represent site components through boxes and grids (at least for two-dimensional interfaces).
Who does it? Wireframes are well within the wheelhouse of user experience designers, business analysts and user interface specialists. It is wise to have a wireframe upon which a developer can start to work or inform their plan, and so someone with some experience or understanding of the subsequent stages in development is a good person to have on wireframing.
Why do it? Wireframes clarify the structure for navigating between different content elements, work to meet one level of the users’ needs (namely - the structural interaction needs), and to plan the interaction design generally.
Visual Design
What is it? Visual design is all of the work that bridges a wireframe to a working prototype. If the wireframe was lower fidelity, then visual design will be the process of folding in content, aesthetic decisions, considerations for accessibility, etc. The visual design step will include everything that creates the “look and feel” of the product or service. User interfaces will be fleshed out in this stage, and graphic design decisions are firmed up.
Who does it? Graphic designers, interface designers, user experience designers, and branding specialists can all be part of this process.
Why do it? Products and services developed with the end user in mind can make the interface simpler to use and increase the probability of repeat users, or loyal customers as the case may be. Ideas that are well thought out but poorly implemented can be abandoned prematurely if the design does not accurately capture the value added by the content.
Prototype
What is it? A prototype is a sample of the product. It can be an early draft of what the team thinks the final version will be, or it can be a snippet of the overall experience. The prototype accurately represents the type of content conveyed by the product, how that content is accessed, and the navigation between elements.
Who does it? Prototypes are typically finalized by developers, but may be developed in concert with designers and engineers.
Why do it? The prototype is the product. Without a prototype, the team has nothing. The better question to ask is, “why can’t I stop here?”, which is a crucial concern for design thinking generally. How can a team know when their design is ready to ship? The (admittedly underwhelming) answer is that it depends on the expectations of the users and the design team. If a product is distributed as an early alpha test version, the expectations might be lower, and so the first couple of prototypes might actually go live to a select group of people. However, for most design problems, the first prototype will not be the final one. One way to determine when iterations stop is to have a critical quantitative research question (e.g. time to gather information from the website your team is developing). When that value reaches a desired threshold, the product might go live. Better still, there may be multiple measures in addition to a team consensus, and until all measures are met and the team agrees the product is ready to ship, the product goes through more iterations of the Collaborative Gear.
Quantitative Research
What is it? Quantitative research is the study of performance indicators or research questions in a way that can be expressed numerically. Counting successful attempts at solving a problem with the product is an example of quantifiable research. Larger firms with more resources might choose to include more advanced metrics like eye tracking, mouse tracking, response time analysis, survey data, etc. to quantify how effective the product is at meeting the needs of the users in the most user-friendly way possible. In each round of testing, the researcher’s responsibility is to ensure that the data required to perform both their scholarly and their design research is available. The test is both a test of the product and of the process.
Who does it? Quantitative research is traditionally conducted by people with some statistical training and some programming knowledge. People from the more technical disciplines in the social sciences, engineers, and data scientists are all among those who would be effective quantitative researchers.
Why do it? Qualitative research is great for diving deep into a few people’s experiences, while quantitative research is good for getting a general sense of many people’s experiences. Quantitative research can be a deep-diving pursuit too, though: having people come into the lab for careful observation of their experience with the prototype can offer insight that users may not even be aware of. For example, eye tracking data can provide an index of arousal (or stress) that might not be brought up by the users themselves. Additionally, if a user consents to have data collected while they engage with the product under development, the researchers can query particular parts of the interaction process without explicitly asking the participant about their experience, thereby avoiding expectancy effects that might come up through qualitative research. Using both approaches, then, offers the most robust understanding of the user’s experience, and thereby assesses the extent to which the product does its job effectively.
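As a small worked example of the kind of numerated measures this stage produces, the sketch below computes a task success rate and a mean completion time from hypothetical session records and compares them against team-set thresholds; all values are invented.

```python
# Hypothetical session records from a round of quantitative testing.
from statistics import mean

sessions = [
    {"completed": True,  "seconds": 41.2},
    {"completed": True,  "seconds": 38.7},
    {"completed": False, "seconds": 90.0},
    {"completed": True,  "seconds": 44.9},
]

success_rate = mean(s["completed"] for s in sessions)
mean_time = mean(s["seconds"] for s in sessions if s["completed"])

# Thresholds established at the close of market research (invented values).
meets_threshold = success_rate >= 0.75 and mean_time <= 45.0
print(f"success {success_rate:.0%}, mean time {mean_time:.1f}s, "
      f"advance: {meets_threshold}")
```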
Final Prototype
What is it? The final prototype is the version that reaches the threshold established by the design team or the product manager. It’s the version that the end user sees.
Who does it? The final prototype is the collective output of everyone on the design team, though a developer is likely to be the last person to touch it.
Why do it? Until the final prototype is released, the product is an idea. When the final prototype ships, the design team might keep an eye on user reviews and responses and begin the DEED model again with their eyes on the next version of the product. In that next iteration, then, the market research is largely provided by the responses of people after engaging with the final prototype of the first product.
One Good DEED Deserves Another
Organizationally, a project manager can set a threshold upon which the development cycle is closed. This threshold is effectively evaluated after the quantitative research test: should the results of user testing suggest below-threshold differences from the previous iteration, the cycle advances for the final time, with user experience testing acting as a check and balance. We suggest that the quantitative research stage is a good place to end iterations of the Collaborative Gear. While some products or services are better evaluated qualitatively, many will be suited to some quantifiable metric of success. If the results of the research suggest that the design is sufficiently close to the metrics defining desirability, feasibility and viability, then the next step is to prepare the final prototype.
In DEED, the design team should talk about what success looks like and establish the qualities of the minimally viable product prior to the first iteration of the Collaborative Gear (i.e. at the close of market research). The researchers can then discuss how to assess success, in a manner informed by everyone involved in design. A project manager might decide that the product or service will never be fully completed, and set a threshold of minimal change that would trigger release of the final prototype. Naturally, if the iterations are showing no improvement, the team will be having conversations about whether the design is ready to go or ready to be abandoned. Setting the minimum number of changes per prototype ahead of time supports (a) rapid development and (b) efficient use of resources by avoiding endless cycles of the same design. Efficiency and agility, in this regard, are informed by both designers and researchers. The DEED design process can be made leaner by constraining the maximum number of cycles through the process, or it can be made more research-intensive by setting more sensitive advancement thresholds.
The start of the DEED model might be the end of an earlier iteration through the full process. Large firms, like established game design companies or big tech departments, might iterate through multiple versions of a product. The DEED model can be chained (DEED model 1 for version 1.1; DEED model 2 for version 1.2, etc.) by setting iterative goals during design and development. In these environments, the output of the whole design process naturally leads to the input of the next version. As soon as a website goes live by achieving sufficient success to close out the first iteration of the DEED model, for example, the team can begin brainstorming improvements based on what is working well, and on how they might want to expand it to offer different solutions to different problems.
This design model can work for teams of any size that have interest in both extending knowledge and improving design. A single person could feasibly play the role of market researcher, qualitative researcher, and wireframer, but naturally, people who specialize in each of these roles are desired if resources allow. Keeping these stages separate encourages some autonomy and exploration within each of the stages. Some room for independent innovation is important, and helps to assuage concerns of “groupthink” [26] arising from having every team member entirely invested in every step of the process. Keeping ideas from teammates hinders progress, however, so the DEED model does keep the output of these stages connected, and encourages effective communication between team members across the connected parts of the DEED process.
In DEED, the design thinking process is a way to manifest design thinking generally. We think it’s useful to disambiguate design thinking as a mindset from the design thinking process as a set of steps. While design thinking is a solution-based approach to a problem, a design thinking process is the set of steps required to get to that solution. The DEED model is an example of such a process. In addition to being helpful in its own right, we hope that the DEED model sets a precedent of treating design thinking and its implementation as separable ideas.
6 Design Thinking
Before email really caught on as a communication tool, there was a time when new mail notifications were going unnoticed and messages were left unread. At the time, the notification was a prominent arrow on top of the inbox indicating a new message. In the original design of this email software, the notification was generally overlooked, and users insisted that emails sent days in advance had not been available until support staff pointed them out. A clever designer implemented a “You’ve got mail” component to the notification, which assuaged concerns about mail going unnoticed [16]. The now antiquated “You’ve got mail” notification appears to be polarizing, garnering some retro appreciation (some people today have it set as their phones’ email notifier) while others start conversation threads on forums about how awful it is to overhear it. To not have included that message, though, might have meant that email remained ineffective as a communication tool; a tool that would be missed by the people sending and receiving 196,400,000,000 emails each day [20]. This innovation is an example of design thinking in action.
Design thinking is a phrase that seems to be used more often than it is explained. Like many community-based phrases, it is well understood by the people who use it, but difficult to explain to people outside of design circles. In an attempt to lower the barrier to entry so more people can employ it, we provide an exposition of what design thinking is. Perhaps more critically, in this section we disambiguate design thinking from its implementation: the design thinking process. The DEED model is an example of a design thinking process. While design thinking is more of a framework, or a way to approach problems, the design thinking process is a set of steps to apply that way of thinking. Put simply: design thinking is the why, and the design thinking process is the how.
Design thinking is difficult to define formally. Mootee makes the issue of defining it clear: “[t]here is no single, unifying, common definition of design thinking” [21]. Many subscribers to design thinking define it as a mentality toward problem solving, while others see design thinking as a “toolbox” to be used in organizational settings to improve the collaborative process [22]. As one report on the matter points out, both sets of language are used by different organizations, defining design thinking as both a toolkit and a mental trait [23]. Among their sample, there is a tension between those who see design thinking as a descriptive element, whereby practitioners would identify as design thinkers, and others who view design thinking in prescriptive terms, as something to be used by a group of collaborators. This tension results in a vagueness of terminology which risks “Design Thinking” turning into a buzzword rather than a serious concept to implement in business models.
Both the prescriptive and descriptive ideas of design thinking have merit, but would benefit from being differentiated. We distinguish between design thinking (descriptive, a mindset) and the design thinking process (prescriptive, a set of steps to find solutions). Design thinking is the mindset of approaching problems openly, generally and creatively while considering the genuine use case of the product under development. A design thinker keeps the three constraints of desirability, feasibility, and viability [24] in mind as they work through the brainstorming process. When it comes to the actual application of the design thinking process, steps are taken by a team or individual to find desirable, feasible and viable solutions. This often means iterating through prototypes to maximize all three. In this sense, a practitioner of the design thinking process can assess their design against measures of desirability, feasibility and viability. With this separation of mindset (design thinking) and practice (the design thinking process), much of the uncertainty associated with design thinking is reduced.
The DEED model presents a case example of how the design thinking process can be applied, while also helping to collect valuable information.
7 Conclusion
The symbiotic relationship between science and design has been noted before [1]. Making the roles of both scientists and designers necessary components of a design process solidifies the relationship between them, and supports improvements in both product design and general knowledge. Larger firms have more opportunity to support each stage, while smaller ones might have to simplify the process; but allowing for a step in the design where someone asks themselves what their results tell them about their understanding of the world generally — beyond the product and beyond the market — gives them an opportunity to (a) advance human knowledge, (b) consider the longer term impact of their work, and (c) consider related gaps that their product might fill.
The DEED model is one strategy to implement the “applied and basic combined” research approach [1], and comes at a critical time. Science is struggling with rising “post-truth” rhetoric and socio-political hurdles at the same time that a record number of Ph.D.s are graduating into a small academic job pool [25]. Design, meanwhile, is enjoying a surge in popularity, and innovation is among the most valued qualities in business. While opportunities to work in basic science may be shrinking, applied science and design benefit from that research, and so this model offers a method by which information can be collected by basic scientists for immediate application and general theorizing.
The DEED model is a design thinking process that offers a balance between including research and evidence in the design process while ensuring that designers have space and support to innovate. The DEED model encourages the researcher to act during specific stages in an adapted agile design strategy. This balances the value added by a researcher with the chance that the research impedes design innovation and brainstorming, which has been a concern for some designers. It immediately supports the design of a single product/service by virtue of its agile design properties; it supports the growing body of knowledge in an organization or community; and it supports the understanding of the natural world by contributing to scientific findings. It is an alliance of people, methods, and complementary goals that, when combined, benefit the product, the organization, and the society as a whole.
Notes
1. It should be noted that there are limitations in this massive replication attempt, and the failure to replicate may not be quite as dire as stated in the original report. Psychology does, nonetheless, have some serious work to do to improve the robustness of our findings.
References
Shneiderman, B.: The New ABCs of Research: Achieving Breakthrough Collaborations. Oxford University Press, Oxford (2016)
Cross, N.: Designerly ways of knowing: design discipline versus design science. Des. Issues 17, 49–55 (2001)
Thompson, J., Blair, M., Chen, L., Henrey, A.: Video game telemetry as a critical tool in the study of complex skill learning. PLoS ONE 8, e75129 (2013)
Thompson, J., Blair, M., Henrey, A.: Over the hill at 24: persistent age-related cognitive-motor decline in reaction times in an ecologically valid video game task begins in early adulthood. PLoS ONE 9, e94215 (2014)
Thompson, J., McColeman, C., Stepanova, K., Blair, M.: Using video game telemetry data to research motor chunking, action latencies, and complex cognitive-motor skill learning. Top. Cogn. Sci. (2017). http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1756-8765/earlyview
Mayer, L., Rauch, S., Daukaeva, K., Henwood, L.: Agile design. Lecture from RED Academy, Vancouver (2015)
Carlgren, L., Elmquist, M., Rauth, I.: Exploring the use of design thinking in large organizations: towards a research agenda. Swed. Des. Res. J. 1, 47–56 (2014)
Nielsen, J.: Why you only need to test with 5 users. www.nngroup.com, https://www.nngroup.com/articles/why-you-only-need-to-test-with-5-users/
Op-Art.: Op Art History Part I: A History of Perspective in Art. www.op-art.co.uk, http://www.op-art.co.uk/history/perspective/
Enright, J.: Art and the oculomotor system: perspective illustrations evoke vergence changes. Perception 16, 731–746 (1987)
Schaller, R.: Moore’s law: past, present and future. IEEE Spectr. 34, 52–59 (1997)
Open Science Collaboration: Estimating the reproducibility of psychological science. Science 349, aac4716 (2015)
Kingstone, A., Smilek, D., Ristic, J., Friesen, C., Eastwood, J.: Attention, researchers! It is time to take a look at the real world. Curr. Dir. Psychol. 12, 176–180 (2003)
Colonius, H., Diederich, A.: Multisensory interaction in saccadic reaction time: a time-window-of-integration model. J. Cogn. Neurosci. 16, 1000–1009 (2004)
Levin, D., Momen, N., Drivdahl, S., Simons, D.: Change blindness blindness: the metacognitive error of overestimating change-detection ability. Vis. Cogn. 7, 397–412 (2000)
Varakin, D., Levin, D., Fidler, R.: Unseen and unaware: implications of recent research on failures of visual awareness for human-computer interface design. Hum. Comput. 19, 389–422 (2004)
Borji, A., Lennartz, A., Pomplun, M.: What do eyes reveal about the mind?: algorithmic inference of search targets from fixations. Neurocomputing 149, 788–799 (2015)
Underwood, G., Chapman, P., Brocklehurst, N., Underwood, J., Crundall, D.: Visual attention while driving: sequences of eye fixations made by experienced and novice drivers. Ergonomics 46, 629–646 (2003)
Peterson, M.: The problem with personas. blog.prototypr.io. https://blog.prototypr.io/the-problem-with-personas-82eb57802114#.yum1nufsg
Radicati, S.: Email statistics report, 2013–2017. http://www.radicati.com/wp/wp-content/uploads/2013/04/Email-Statistics-Report-2013-2017-Executive-Summary.pdf
Mootee, I.: Design Thinking for Strategic Innovation: What they can’t Teach You at Business or Design School. Wiley, Hoboken (2013)
Johansson-Sköldberg, U., Woodilla, J., Çetinkaya, M.: Design thinking: past, present and possible futures. Creativity Innov. Manag. 22, 121–146 (2013)
Schmiedgen, J., Rhinov, H., Koppen, E., Meinel, C.: Parts without a whole? The current state of design thinking in organizations. Study Report 97 from Hasso-Plattner Institute for Technical Software Systems at Potsdam University (2015). ISBN: 978-3-86956-334-3
Brown, T.: Change by Design: How Thinking Transforms Organization and Inspires Innovation. HarperBusiness, New York (2009)
Schillebeeckx, M., Maricque, B., Lewis, C.: The missing piece to changing the university culture. Nat. Biotechnol. 31, 938 (2013)
Nemeth, C., Nemeth-Brown, B.: Better than individuals. In: Group Creativity: Innovation Through Collaboration, pp. 63–84 (2003)
Doody, S., Bird, A., Ross, D.: The effect of auditory and visual models on acquisition of a timing task. Hum. Mov. Sci. 4, 271–281 (1985)
Acknowledgements
We would like to thank members of the Cognitive Science Laboratory who offer their time and talents to developing projects. Thanks especially to Steve DiPaola, and Thomas Spalek, whose questions and guidance encouraged the development of this material.