Abstract
Heuristic evaluation (HE) is one of the most commonly used usability evaluation methods. In HE, 3–5 evaluators inspect a system guided by a list of usability heuristics, with the goal of detecting usability issues. Although HE is popular in the usability field, it depends heavily on the expertise of the evaluator, meaning that there is a large gap in HE performance between expert and novice evaluators. One of the factors contributing to this gap is the difficulty of thoroughly understanding usability heuristics: usability heuristics are abstract and require simplification to be better understood by novice evaluators. In this work, our goal was to simplify Nielsen's heuristics, one of the most popular sets of usability heuristics, to make them easier to understand. We interviewed 15 usability experts from both academia and industry with at least 4 years of experience in the field and asked them to explain Nielsen's heuristics in detail. We analyzed their responses and produced a modified list that is more detailed than Nielsen's original.
1 Introduction
Usability is a central value in human-computer interaction (HCI) and user experience (UX). One of the main objectives of HCI/UX practitioners is to ensure that the system being evaluated is usable. One of the most well-known definitions of usability is that provided by Nielsen [1], which states that usability comprises the following five components:
- Learnability: "How easy is it for users to accomplish basic tasks the first time they encounter the design?"
- Efficiency: "Once users have learned the design, how quickly can they perform tasks?"
- Memorability: "When users return to the design after a period of not using it, how easily can they reestablish proficiency?"
- Errors: "How many errors do users make, how severe are these errors, and how easily can they recover from the errors?"
- Satisfaction: "How pleasant is it to use the design?"
Based on this definition, for any system to be considered usable, it has to be learnable, memorable, and efficient, handle errors well, and be satisfying. To measure the usability of a system, many methods have been developed and introduced. One of the most commonly used methods is heuristic evaluation (HE) [2, 3]. The idea behind HE is simple: a number of usability practitioners, ideally 3–5 [4], evaluate a system guided by usability heuristics. HE gained popularity because it is a discount method [5], meaning that it does not require much time, money, or resources. However, the level of expertise of the evaluators affects the results of the evaluation. Originally, Nielsen stated that 3–5 usability experts using HE can identify between 74% and 87% of the usability issues of a system, while the same number of novice evaluators can identify 51% of the usability issues [6]. However, a subsequent study showed that novices can detect only 23% of usability issues [7]; the gap is therefore even larger than Nielsen expected. We investigated the reasons behind this discrepancy [8] and found that one of the main reasons is novices' lack of understanding of heuristics and of the role that heuristics play in the detection of usability issues. This finding matches the results of other researchers. For example, [9] found that usability heuristics themselves are not usable due to their level of abstraction. Moreover, [10] reported that novice evaluators have difficulties understanding some usability heuristics, cannot differentiate between some heuristics, and think that some heuristics are the same. In [11], a questionnaire was designed to assess novice evaluators' perception of usability heuristics; the results showed that novice evaluators need more complete specifications of the heuristics.

Understanding anything is the first step to applying it. Therefore, without a proper understanding of usability heuristics, the gap between usability experts and novices will persist. Improving the performance of novice evaluators is important: usability experts are not always available, either because they are difficult to find or because they are expensive to hire. Even in research studies, novices are recruited to perform HE [12, 13], and small companies usually hire novices, as shown in a Brazilian study [14]. The 51% identification rate that Nielsen proposed can be achieved when 3–5 novices perform HE; however, some companies do not employ more than one usability practitioner, as shown by a Malaysian survey [15]. Therefore, it is important to start working to improve the performance of novices and to provide them with more accessible and usable methods.

In our previous work [8], we created a step-by-step protocol for HE and stated that usability heuristics are too abstract and need to be further simplified to be easily understood by novices. In this study, we complement that work by simplifying the heuristics themselves. We used a snowball sampling technique to recruit usability experts. We asked 15 usability experts with at least 4 years of experience, from both academia and industry, to thoroughly explain each of Nielsen's 10 usability heuristics, i.e., explain them; give examples; and describe their significance, their applicability, and the consequences of ignoring them. After transcribing all the responses, we conducted a thematic analysis to synthesize the responses and group the concepts identified under each heuristic.
2 Related Work
HE as a method was originally developed by Nielsen and Molich in the early 1990s [16, 17], when they proposed a list of nine usability heuristics. Nielsen later revised this list, resulting in a list of ten usability heuristics [18]. This set of usability heuristics is considered one of the most famous in the field. The 10 heuristics are as follows:
- Visibility of system status: "The system should always keep users informed about what is going on, through appropriate feedback within reasonable time."
- Match between system and the real world: "The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order."
- User control and freedom: "Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo."
- Consistency and standards: "Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions."
- Error prevention: "Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action."
- Recognition rather than recall: "Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate."
- Flexibility and efficiency of use: "Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions."
- Aesthetic and minimalist design: "Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility."
- Help users recognize, diagnose, and recover from errors: "Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution."
- Help and documentation: "Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large."
Although Nielsen's heuristics are the most well-known heuristics in the field, other heuristics both preceded and succeeded them. These include Shneiderman's 8 golden rules [19], which are 8 general principles for designing interfaces; Tognazzini's 19 interaction design principles [20]; Gerhardt-Powals' 10 cognitive principles [21]; and Mandel's 3 golden rules [22]. These are just examples of the many usability heuristics that have been produced over the years to aid usability practitioners in evaluating systems.
Since many usability heuristics have been developed, some researchers prefer to combine existing heuristics instead of creating new ones from scratch. In [23], the researchers devised a new list of 11 principles called the Multiple Heuristic Evaluation Table (MHET) based on multiple existing heuristics, such as Nielsen's heuristics and Shneiderman's principles. In [24], the authors attempted a similar task; they created a new set of 15 heuristics based on Nielsen's heuristics and Tognazzini's principles.
Another approach to addressing the abundance of available usability heuristics is to compare them. In [25, 26], Nielsen's heuristics were compared with Gerhardt-Powals' principles. Sohl [25] showed that Gerhardt-Powals' principles are better than Nielsen's heuristics at detecting real problems, while Nielsen's heuristics are better at detecting severe problems; Hvannberg et al. [26] reported no significant difference between the two.
All the previously mentioned heuristics are general heuristics, meaning they were not designed to target specific types of platforms, audiences, or contexts. In contrast, some researchers have focused on producing domain-specific heuristics. The researchers in [27] developed a set of 43 heuristics to assess the playability of video games, which they called Heuristic Evaluation for Playability (HEP). In [28], the researchers produced a list of 11 heuristics to evaluate the usability of touchscreen-based mobile devices. In [12, 29], the researchers developed heuristics targeted at specific audiences: the researchers in [29] focused on children and created a comprehensive list of heuristics to evaluate child e-learning applications, which they called the Heuristic Evaluation for Child E-learning Application (HECE), while the researchers in [12] focused on elderly people and produced a list of 13 heuristics to evaluate the usability of mobile launchers for elderly people.
This is by no means an exhaustive list of the domain-specific heuristics but rather some examples of what has been produced.
Some studies have aimed to facilitate the use of HE for novice evaluators. This trend was motivated by [7], which showed that novices' performance was far worse than originally stated by Nielsen. Subsequently, many studies were conducted with the goal of enhancing the performance of novice evaluators. In [30], the researchers investigated the tactics that usability experts use when applying HE; they interviewed 4 usability experts and produced a list of 38 tactics that experts use. The goal was to help novices conduct HE by using the same tactics that experts use. In [31], the researchers included real users in the HE process to examine whether the performance of nonexpert evaluators could be enhanced. They developed two methods, namely, the user exploration session (UES-HE) and the user review session (URS-HE), and both proved effective in enhancing the performance of novice evaluators. Other studies have examined whether children are able to perform HE. In [32], the researchers recruited 20 children between 12 and 13 years of age, explained the heuristics to them, and asked them to evaluate a game; however, the results of their evaluation were not satisfactory. In a similar study [33], the researchers recruited 12 children aged between 10 and 11 but presented them with heuristics written in a simplified manner appropriate for their age and asked them to evaluate a music game. The results showed that the children faced some difficulties but nonetheless demonstrated the potential for good HE performance.
As seen in the literature, there is a growing interest in facilitating the use of HE for novices. Although there are a large number of usability heuristics, both general and domain-specific, Nielsen’s heuristics are still the most recognized and used heuristics. Therefore, instead of creating new usability heuristics, our goal in this study is to further enhance the understandability of Nielsen’s heuristics by breaking down each heuristic.
3 Methodology
The goal of this study was to explain Nielsen's heuristics in detail. To do so, we decided to conduct interviews. Interviews give participants a chance to explain their responses in depth, which helps researchers obtain a better understanding of the matter at hand. We specifically chose to conduct semistructured interviews because, in contrast to structured interviews, they allow some flexibility to ask additional questions when necessary, and in contrast to unstructured interviews, they still ensure that the specific points being investigated are addressed and keep the conversation focused.
Before conducting the interviews, we had to decide whom we would interview. Obviously, the goal was to interview usability practitioners with some level of experience. However, defining who is an expert is difficult, not only in HCI but also in most fields to varying degrees. There is not much in the literature on defining who is an expert in HCI. To the best of our knowledge, the only work that has pursued this topic is [34], in which the researchers defined an expert as someone with at least 10,000 h of practice as well as a Master's or PhD in the field. However, this definition is rather strict, since the researchers based it on the idea of deliberate practice [35], which suggests that to become an expert in a subject, a person needs 10,000 h (roughly 10 years) of deliberate practice. This criterion has been widely criticized for not being accurate. In [36], the researchers showed that practice is not the only factor that determines who becomes an expert; other factors matter as well, such as when the person started to practice and the person's IQ. Therefore, one can become an expert with less than 10,000 h of practice, and one might not become an expert even after 10,000 h of practice. Since usability experts are difficult to find, and since this criterion is not necessarily appropriate, we adopted less strict criteria, such as those proposed in [30], in which an expert is defined as someone with at least 4 years of experience in the field. In our study, along with this criterion, we stipulated that participants should be familiar with Nielsen's heuristics and have performed HE at least three times.
Our next step was to decide how many interviews should be performed. Interviews should be conducted until saturation is reached. For this study, the recommended number of interviews based on [37] was 12.
To recruit participants, we used a snowball sampling technique. We ultimately interviewed 15 usability experts with at least 4 years of experience who came from both academia and industry. We included 7 experts from academia and 8 experts from industry. Nine interviews were conducted online via conference calls, and 6 interviews were performed in person. The main questions in the interviews asked participants to explain each of the heuristics; give examples; and describe their significance, their applicability and the consequences of ignoring them.
After the interviews, we analyzed the responses of the participants by using thematic analysis [38].
4 Results
There are two main issues with the heuristics. First, some heuristics are too abstract, as they encompass multiple ideas that need to be listed separately; one example is visibility of system status. Second, some heuristics contain ideas that are interrelated but not the same. These ideas should be explained separately; otherwise, evaluators might focus on one of them and neglect the other. One example of a heuristic with this issue is help and documentation.
The detailed heuristics are as follows:
1. Visibility of system status: The idea of this heuristic is to always keep the user informed. Under this heuristic, there are four subheuristics (a short code sketch follows them):

1.1. State: Users should always know what the state of the system is and what they can do in the system at any given moment. For example, if there is a link on the page, it should appear in a different color and be underlined so that the user knows that he/she can click on it.

1.2. Location: Users should always know where they are, including in which system they are, in which part of the system they are, and where they are in relation to other parts of the system. For example, the logo of the system at the top of the page lets users know in which system they are, the title of the page lets them know in which part of the system they are, and the navigation bar lets them know where they are in relation to other parts of the system.

1.3. Progress: Users should know how far they are from completing their task. This applies to both active and passive situations. Active situations occur when the user is completing a multistep task; for example, when a user is completing a multipage form, he/she should know how many pages have been completed and how many pages are left. Passive situations occur when a user takes an action and waits for the system to complete it; for example, when the user downloads a file, the system shows him/her how long the file will take to download.

1.4. Closure: Users should clearly know that their task is completed and whether it has been completed successfully. For example, when the user performs a financial transaction, he/she should know whether it went through.
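To make the progress (1.3) and closure (1.4) subheuristics concrete for readers who build interfaces, here is a minimal TypeScript sketch of status feedback for a passive situation such as a file download. All names (`DownloadStatus`, `describeStatus`) are illustrative assumptions for this example, not part of the heuristic itself.

```typescript
// Hypothetical status type for a passive task; names are assumptions.
type DownloadStatus =
  | { kind: "in-progress"; bytesReceived: number; bytesTotal: number }
  | { kind: "done"; success: boolean };

// Turn a status into user-facing feedback.
function describeStatus(status: DownloadStatus): string {
  switch (status.kind) {
    case "in-progress": {
      // Progress (1.3): tell the user how far along the task is.
      const pct = Math.round((status.bytesReceived / status.bytesTotal) * 100);
      return `Downloading... ${pct}% complete`;
    }
    case "done":
      // Closure (1.4): state clearly whether the task succeeded.
      return status.success
        ? "Download finished successfully."
        : "Download failed. Please try again.";
  }
}

console.log(describeStatus({ kind: "in-progress", bytesReceived: 512, bytesTotal: 2048 }));
// -> "Downloading... 25% complete"
```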
2. Match between system and the real world: The idea of this heuristic is to provide users with content they understand and are familiar with. Under this heuristic, there are three subheuristics:

2.1. Understandability: Users should be able to understand any content that is presented to them in the system. Content refers not only to text but to anything presented in the system, such as pictures, icons, or metaphors. The understandability of the content depends heavily on the target audience; therefore, the target audience should be kept in mind when examining the understandability of the content.

2.2. Natural and logical order: The content and the series of steps of an action in the system should follow a natural and logical order. A natural order refers to the order of steps that people usually follow when performing similar tasks in the real world. For example, in an e-shop, the steps to buy an item should be similar to the steps that users follow when they buy an item from a physical shop. However, not all tasks performed digitally have real-world counterparts; in such cases, the task should follow an intuitive and logical order as much as possible.

2.3. Appropriateness: The content should not only be understandable but also appropriate and acceptable. For example, if the system is expected to be used by children, then certain words or phrases should not be used. Alternatively, if the system is expected to be used by users from a certain culture, then content that might be perceived as offensive should not be displayed. Therefore, appropriateness also depends on the target audience of the system.
3. User control and freedom: The idea of this heuristic is to increase users' control over the system and their freedom. Under this heuristic, there are three subheuristics (a sketch of reversibility follows them):

3.1. Reversibility: Anything users do should be reversible. Users should be able to undo or redo any action they take in the system. For example, if the user deletes a certain file, he/she should be able to retrieve the deleted file if desired.

3.2. Emergency exit: Users should be able to exit any undesirable situation in the system. If users are faced with a situation in which they do not know how to act or cannot find what they want, then there should be an easy way out of the situation. For example, some websites show continuous pop-ups that the user does not know how to block, which is a violation of this heuristic.

3.3. Informing users: Users should be informed about any action they have taken in the system. This is especially crucial when the actions the user takes are important or critical. For example, when the system asks the user to enter personal information, the system should explain to the user why he/she is being asked to enter this information and how the system is going to handle it.
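Reversibility (3.1) is commonly implemented with a command history. The following is a minimal sketch, assuming a hypothetical `Command` interface with `apply`/`revert`; it is one possible design, not one prescribed by the heuristic.

```typescript
// A minimal command history; interface and class names are assumptions.
interface Command {
  apply(): void;  // perform the action
  revert(): void; // undo the action
}

class History {
  private undoStack: Command[] = [];
  private redoStack: Command[] = [];

  execute(cmd: Command): void {
    cmd.apply();
    this.undoStack.push(cmd);
    this.redoStack = []; // a new action invalidates the old redo chain
  }

  undo(): void {
    const cmd = this.undoStack.pop();
    if (cmd) {
      cmd.revert();
      this.redoStack.push(cmd);
    }
  }

  redo(): void {
    const cmd = this.redoStack.pop();
    if (cmd) {
      cmd.apply();
      this.undoStack.push(cmd);
    }
  }
}

// Usage: deleting a file becomes reversible when modeled as a command.
const history = new History();
history.execute({
  apply: () => console.log("file moved to trash"),
  revert: () => console.log("file restored from trash"),
});
history.undo(); // -> "file restored from trash"
```

Note that deletion is modeled as "move to trash" rather than immediate removal; that is what makes `revert` possible at all.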
4. Consistency and standards: This heuristic has two interrelated ideas. These two ideas are as follows:

4.1. Consistency: Once a certain element is used in one part of the system, it should be used in a similar way throughout the system. This consistency can take the following forms:

- Consistency in meaning: if an element has one meaning in one part of the system, it should have the same meaning in the whole system.
- Consistency of function: if an element does one thing in one part of the system, it should do the same thing in the rest of the system.
- Consistency of effort: if there is a multistep task, the effort should be divided equally between the steps so the user knows what to expect.
- Consistency of organization: if one part is organized in a certain way, the rest of the system should follow the same general organization.
- Consistency of feeling: there should be a consistent feeling throughout the system, and the system should be perceived as one unit.

4.2. Standards: The design of the system should incorporate the knowledge that users have from their previous experiences with similar systems. Therefore, common practices and conventions should be applied to the system to make the user's interaction with the system easier. For example, if most websites place the search bar at the top of the page and users are used to that placement, then the system should follow suit and place the search bar at the top of the page.
5. Error prevention: The idea of this heuristic is that rather than waiting for users to make mistakes and then correcting them, errors should be prevented in the first place. Under this heuristic, there are seven subheuristics (a sketch combining constraints and flexible inputs follows them):

5.1. Instructions: When a task has a specific way to be performed, the user should be provided with instructions on how to perform it. For example, when the system asks the user to enter a username and password, there should be instructions next to this request to inform the user about what the username and the password should and should not contain. Otherwise, the user might have to enter the username and password multiple times until he/she determines what the requirements are.

5.2. Constraints: The system should not allow the user to enter certain inputs or use certain elements when doing so will inevitably produce erroneous or undesirable outcomes. For example, if the user is booking a flight, the user should not be allowed to enter a return flight date that precedes the date of the departure flight.

5.3. Confirmation: Users can sometimes take actions in the system that are unintended. Therefore, the system should ask users to confirm the actions they have taken to ensure that the actions about to be implemented are intended by the user. However, confirmation should not be requested for every action the user takes in the system; it should be requested when the action is not easily undone or when it has serious consequences. For example, if the user is about to send a very large amount of money to someone, the system should ask the user to confirm the transfer to ensure that the correct amount is being sent and that the right person is going to receive it.

5.4. Notification: Changes and updates could occur in the system that could affect the outcomes and the performance of the user. Therefore, the user should be notified of any changes or updates to the system to avoid any undesirable outcomes. For example, if the user is using his/her phone, he/she might not notice that the battery is running low, so the system should notify him/her at a certain point that the battery is about to die to allow the user to take action. However, notifications should be used only when serious consequences are possible, because continuous notifications could be annoying to users and could themselves have an adverse effect on the user's performance.

5.5. Autosaving: Various occurrences in the system can cause all the user's inputs to be deleted or disappear. Therefore, the user's inputs should be autosaved so the user can retrieve them if anything happens to the system. Autosaving is most needed when the inputs are time-consuming or critical. For example, if the user is writing an essay on an e-learning website, the site should autosave the user's inputs so that if anything happens, the effort put in by the user does not go to waste.

5.6. Flexible inputs: Sometimes there is only one way for users to enter an input; however, when an input can take multiple forms, the system should allow users to enter it in any form they know or feel comfortable with, which decreases the chance of entering it incorrectly. For example, when entering a date, some users are comfortable entering the name of the month, while others are comfortable entering the month as a number; the system should accept both forms.

5.7. Defaults: In many systems, there are default states/modes that users start with. Defaults are critical because they determine the output and how the user interacts with the system. The user's lack of familiarity with or knowledge of the default state might lead to erroneous outcomes. Therefore, the defaults of a system should be used carefully and should be selected based on what users are familiar with. Moreover, the user should clearly know what the default state is. For example, if the default of a phone is to not ring when someone calls, then the user would miss calls, as users do not expect that state to be the default when they use any new phone.
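As an illustration of how constraints (5.2) and flexible inputs (5.6) work together, the sketch below accepts a date with the month written as a name or a number, and rejects a return flight that precedes the departure before the booking is submitted. The parsing rules, function names, and messages are assumptions made for this example.

```typescript
// Illustrative sketch only; not a prescribed implementation.
const MONTHS = ["january", "february", "march", "april", "may", "june",
  "july", "august", "september", "october", "november", "december"];

// Flexible inputs (5.6): accept "12 March 2024" as well as "12 3 2024".
function parseFlexibleDate(input: string): Date | null {
  const [day, month, year] = input.trim().split(/[\s/.-]+/);
  if (!day || !month || !year) return null;
  if (!/^\d+$/.test(day) || !/^\d+$/.test(year)) return null;
  const monthIndex = /^\d+$/.test(month)
    ? Number(month) - 1
    : MONTHS.indexOf(month.toLowerCase());
  if (monthIndex < 0 || monthIndex > 11) return null;
  return new Date(Number(year), monthIndex, Number(day));
}

// Constraint (5.2): reject a return date earlier than the departure date
// at input time, instead of letting the booking fail later.
function validateTrip(departure: string, returnDate: string): string | null {
  const dep = parseFlexibleDate(departure);
  const ret = parseFlexibleDate(returnDate);
  if (!dep || !ret) return "Please enter dates as 'day month year'.";
  if (ret.getTime() < dep.getTime()) {
    return "The return date cannot be before the departure date.";
  }
  return null; // no error
}

console.log(validateTrip("12 March 2024", "10 3 2024"));
// -> "The return date cannot be before the departure date."
```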
6. Recognition rather than recall: The idea of this heuristic is that users should not have to rely on their memory but rather should be provided with aids to remember. Under this heuristic, there are two subheuristics (a sketch of suggestions follows them):

6.1. Availability: Anything users need to complete a task should be available in front of them. Therefore, the interface should clearly present everything that users need to accomplish their goal. Moreover, if users are completing a multistep task and there is a piece of information that they will need in more than one step, then this information should be presented not only at the first step but at every step for which they need it, so they are not forced to remember it. For example, in a supermarket, the signs on every aisle tell shoppers what the aisle contains so that they do not need to memorize the layout on every visit.

6.2. Suggestions: Users do not always know exactly what they want when using the system; they might only partially know what they want. Therefore, the system should provide users with suggestions to help them access what they want. For example, the suggestions that Google gives when one starts to type in the search bar help the user access what he/she wants. Additionally, in e-shops, when the user is browsing a certain item, the system suggests items that are frequently purchased with it, which might remind the user to purchase an item he/she wanted. However, the suggestions provided should be as accurate as possible, because inaccurate suggestions might be annoying to users.
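A minimal sketch of the suggestions idea (6.2) follows: completing a partial query from a known catalog. A production system would rank candidates by popularity or the user's history; this example only prefix-matches, and the catalog and function names are illustrative assumptions.

```typescript
// Offer completions once the user has typed a partial query.
function suggest(partial: string, catalog: string[], limit = 5): string[] {
  const query = partial.trim().toLowerCase();
  if (query.length === 0) return []; // nothing typed yet, nothing to suggest
  return catalog
    .filter(item => item.toLowerCase().startsWith(query))
    .slice(0, limit); // cap the list so it stays scannable
}

const catalog = ["usability testing", "usability heuristics", "user interview"];
console.log(suggest("usa", catalog));
// -> ["usability testing", "usability heuristics"]
```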
7. Flexibility and efficiency of use: This heuristic has two interrelated ideas (a sketch of accelerators follows them):

7.1. Flexibility: Most systems are used by multiple types of users and in different situations. Therefore, the system should be flexible enough to accommodate all the different types of users and situations. This can be achieved by providing multiple ways to accomplish the same goal in the system, for example, providing a text reader to accommodate the user when he/she is driving or providing shortcuts for expert users.

7.2. Efficiency: Any task performed in the system should be in its simplest form. Therefore, there should not be any extraneous or unnecessary steps involved in completing the task. Every step of a task should be examined to ensure that it is actually required; if it is not, it should be removed to simplify the task. For example, if a user signing up for a website is asked to enter his/her phone number, but the phone number will not serve any purpose, then the request for the phone number should be removed, because it only demands additional unnecessary effort on the part of the user.
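The accelerators mentioned under flexibility (7.1) can be sketched as a registry in which the same action is reachable through both a menu (the novice path) and an optional keyboard shortcut (the expert path). The API below is a hypothetical illustration, not an established library.

```typescript
// Hypothetical registry; names and API are assumptions for illustration.
const actions = new Map<string, () => void>();
const shortcuts = new Map<string, string>(); // key combo -> action name

// Register an action with an optional accelerator for expert users.
function registerAction(name: string, run: () => void, combo?: string): void {
  actions.set(name, run);
  if (combo) shortcuts.set(combo, name);
}

// Expert path: a key combo triggers the same action as the menu entry.
function onKeyCombo(combo: string): void {
  const name = shortcuts.get(combo);
  if (name) actions.get(name)?.();
}

registerAction("save", () => console.log("document saved"), "Ctrl+S");
onKeyCombo("Ctrl+S");    // expert path, via the accelerator
actions.get("save")?.(); // novice path, e.g. from a menu click handler
```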
8. Aesthetic and minimalist design: The idea of this heuristic is that the design of the system should be appealing and easy to navigate. There are three subheuristics under this heuristic:

8.1. Aesthetics: The attractiveness and beauty of the design is a crucial part of the system that might be overlooked. There is a principle called the aesthetic-usability effect, which states that users perceive systems they find aesthetically pleasing to be more usable. Therefore, if the system is aesthetically pleasing, users will be more forgiving of any usability issues that the system has. However, users might be forgiving of minor usability issues but not major ones. Thus, the design of the system should also be examined in terms of how aesthetically pleasing it is.

8.2. Organization: The system should be clearly organized. The user should not have to spend a large amount of time trying to understand the organization of the system. Items that are related to each other should be organized in a way that shows their relation, and different sections should be separated to make navigation easy.

8.3. Simplicity: The system should not present any extraneous information or elements; only the elements that are necessary should be presented on the page. Extraneous content could make the system cluttered and crowded, which affects the user's ability to navigate it. Moreover, unnecessary content could divide users' attention, so instead of focusing on what they want to accomplish, they might become distracted by other things.
9. Help users recognize, diagnose and recover from errors: This heuristic has three interrelated ideas (a sketch follows them):

9.1. Recognizing errors: The first step in rectifying errors is knowing that an error has occurred in the first place. Therefore, the system should notify the user that something has gone wrong, and the user should be able to clearly understand that an error has occurred. The notification could be a very clear error message, an alerting sound, or a mixture of different formats; the end result should be that the user recognizes that an error has occurred.

9.2. Understanding errors: After learning that an error has occurred, users should be able to determine the exact error. They should be able to identify in which part of the system the error occurred and the nature of the error. Knowing all this information makes it easier for users to resolve the issue. For example, instead of showing a generic message when a user enters an existing username, the system should tell the user that the error occurred in the username field because the username already exists.

9.3. Recovering from errors: In many cases, just telling users what the error is makes it easy for them to rectify it. In other cases, however, the notification might not be enough, and the user might need additional instructions on how to recover from the error. Therefore, especially in cases in which the solution is not intuitive, instructions should be provided to the user on how to rectify the issue.
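One way to operationalize these three ideas is to make every validation error carry the field it concerns, a plain-language description, and a recovery hint. The error shape and function names below are assumptions for illustration, not a format proposed in this paper.

```typescript
// Hypothetical error shape tying the three ideas together.
interface UsableError {
  field: string;    // where the error occurred (9.2: understanding)
  problem: string;  // what went wrong, in plain language (9.1: recognizing)
  recovery: string; // how to fix it (9.3: recovering)
}

// Example check: an already-taken username produces a specific,
// plain-language error instead of a generic failure message.
function checkUsername(username: string, taken: Set<string>): UsableError | null {
  if (taken.has(username)) {
    return {
      field: "username",
      problem: `The username "${username}" is already in use.`,
      recovery: "Please choose a different username, for example by adding a number.",
    };
  }
  return null;
}

const err = checkUsername("amal", new Set(["amal"]));
if (err) console.log(`${err.field}: ${err.problem} ${err.recovery}`);
```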
10. Help and documentation: This heuristic has two interrelated ideas:

10.1. Help: Help refers to direct interaction with the system's support team to resolve any issues the users face. Many people still prefer to interact with someone rather than read a manual or documentation, so users should be provided with this option. Moreover, regardless of how comprehensive the documentation is, it cannot cover all the issues that users might face. Therefore, there should be ways for users to contact the support team by chat, phone, email, etc.

10.2. Documentation: The documentation serves as the guide or manual of the system; it should explain the system and cover most of the issues that users might have while using it. There should be contextual documentation that users can find in the places where difficulties are expected, such as documentation accessed through a question mark button. In addition, there should be general documentation that addresses the whole system, such as frequently asked questions (FAQs) and tutorials.
Table 1 shows a high-level overview of the detailed version of Nielsen's heuristics.
5 Discussion
The main goal of this work was to better equip novice evaluators to conduct HE. One of the main advantages of HE is that it does not require many resources to perform. The briefness of the usability heuristics is another advantage, as it makes the heuristics easy to recall, so evaluators do not have to carry the list with them every time they conduct an evaluation. However, this briefness also introduces a problem, especially for novices, who might not fully grasp the different aspects of any given heuristic. Therefore, in this work, we aimed to balance the two extremes.

Our goal was not to examine the completeness of Nielsen's heuristics or to extend the list by adding new heuristics; doing so might enhance the completeness of the list, but it might also make the list difficult to digest. At the same time, we realized that the current version of the list is not easy to understand and that keeping it that way will affect the quality of novices' HE results. Therefore, our objective was to focus only on the current heuristics and to break down and explain the different concepts and ideas that each heuristic contains. Although the detailed description of the heuristics requires more reading and effort on the part of evaluators, it will potentially enhance their understanding of the heuristics, which will in turn potentially improve the quality of their evaluations.

The main target audience of this work is novices, but experts can benefit from it as well. During the interviews, we noticed that not all experts have the same comprehensive understanding of the heuristics or are able to fully explain each one. Therefore, this detailed list might help some experts enhance their understanding of the heuristics. Finally, this list should be tested and compared to the original list to determine whether it actually enhances evaluators' understanding of the heuristics.
6 Conclusion
Usability is one of the main concerns of HCI. Therefore, usability experts evaluate systems to ensure their usability. One of the most popular methods to evaluate the usability of systems is HE. HE is a method in which 3–5 evaluators evaluate a system guided by a list of usability heuristics. Despite its popularity, HE is not as effective when used by novice evaluators as it is when used by expert evaluators. One of the reasons that makes HE less effective for novices is that usability heuristics are abstract to some degree, which makes fully understanding them challenging. Our goal was to simplify the usability heuristics by explaining each heuristic in detail to capture the different concepts that each heuristic encompasses. We chose Nielsen’s usability heuristics since they are the most recognized usability heuristics. We interviewed 15 usability experts with at least 4 years of experience in the field from both academia and industry. We asked them to explain each of the heuristics; give examples; and describe their significance, their applicability and the consequences of ignoring them. Then, we analyzed their responses and produced a more detailed version of Nielsen’s list of usability heuristics.
In future work, we would like to pursue two directions. First, we want to test whether, compared to Nielsen's original list, this detailed version enhances novices' understanding of the heuristics. Second, we want to examine whether a better understanding of usability heuristics leads to better evaluation performance.
References
Nielsen, J.: Usability 101: Introduction to usability. https://www.nngroup.com/articles/usability-101-introduction-to-usability/
Rosenbaum, S., Rohn, J.A., Humburg, J.: A toolkit for strategic usability: results from workshops, panels, and surveys. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM Press, The Hague, Netherlands, pp. 337–344 (2000). https://doi.org/10.1145/332040.332454
Fernandez, A., Insfran, E., Abrahão, S.: Usability evaluation methods for the web: a systematic mapping study. Inf. Softw. Technol. 53, 789–817 (2011). https://doi.org/10.1016/j.infsof.2011.02.007
Nielsen, J.: How to conduct a heuristic evaluation. Nielsen Norman Group 1, 1–8 (1995)
Nielsen, J.: Guerrilla HCI: using discount usability engineering to penetrate the intimidation barrier. In: Bias, R.G., Mayhew, D.J. (eds.) Cost-Justifying Usability, pp. 245–272. Academic Press, Boston (1994)
Nielsen, J.: Finding usability problems through heuristic evaluation. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM Press, Monterey, California, USA, pp. 373–380 (1992). https://doi.org/10.1145/142750.142834
Slavkovic, A., Cross, K.: Novice heuristic evaluations of a complex interface. CHI’99 Extended Abstracts on Human Factors in Computing Systems, pp. 304–305. Association for Computing Machinery, Pittsburgh (1999)
Abulfaraj, A., Steele, A.: Coherent heuristic evaluation (CoHE): toward increasing the effectiveness of heuristic evaluation for novice evaluators. In: Marcus, A., Rosenzweig, E. (eds.) HCII 2020. LNCS, vol. 12200, pp. 3–20. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-49713-2_1
Cronholm, S.: The usability of usability guidelines: a proposal for meta-guidelines. In: Proceedings of the 21st Annual Conference of the Australian Computer-Human Interaction Special Interest Group: Design: Open 24/7. Association for Computing Machinery, Melbourne, Australia, pp. 233–240 (2009). https://doi.org/10.1145/1738826.1738864
de Lima Salgado, A., de Mattos Fortes, R.P.: Heuristic evaluation for novice evaluators. In: Marcus, A. (ed.) DUXU 2016. LNCS, vol. 9746, pp. 387–398. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-40409-7_37
Rusu, C., Botella, F., Rusu, V., Roncagliolo, S., Quiñones, D.: An online travel agency comparative study: heuristic evaluators perception. In: Meiselwitz, G. (ed.) SCSM 2018. LNCS, vol. 10913, pp. 112–120. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91521-0_9
Al-Razgan, M.S., Al-Khalifa, H.S., Al-Shahrani, M.D.: Heuristics for evaluating the usability of mobile launchers for elderly people. In: Marcus, A. (ed.) DUXU 2014. LNCS, vol. 8517, pp. 415–424. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-07668-3_40
Paz, F., Paz, F.A., Pow-Sang, J.A.: Experimental case study of new usability heuristics. In: Marcus, A. (ed.) DUXU 2015. LNCS, vol. 9186, pp. 212–223. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-20886-2_21
de Salgado, A.L., Amaral, L.A., Freire, A.P., Fortes, R.P.M.: Usability and UX practices in small enterprises: lessons from a survey of the Brazilian context. In: Proceedings of the 34th ACM International Conference on the Design of Communication. Association for Computing Machinery, Silver Spring, MD, USA, pp. 1–9 (2016). https://doi.org/10.1145/2987592.2987616
Hussein, I., Mahmud, M., Tap, A.O.M.: A survey of user experience practice: a point of meet between academic and industry. In: 3rd International Conference on User Science and Engineering (i-USEr). IEEE, Shah Alam, Malaysia, pp. 62–67 (2014). https://doi.org/10.1109/iuser.2014.7002678
Molich, R., Nielsen, J.: Improving a human-computer dialogue. Commun. ACM 33, 338–348 (1990). https://doi.org/10.1145/77481.77486
Nielsen, J., Molich, R.: Heuristic evaluation of user interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, Seattle, Washington, USA, pp. 249–256 (1990). https://doi.org/10.1145/97243.97281
Nielsen, J.: 10 usability heuristics for user interface design. https://www.nngroup.com/articles/ten-usability-heuristics/
Shneiderman, B.: Designing the User Interface: Strategies for Effective Human-Computer Interaction. Addison-Wesley Publishing Co., Reading, MA (1987)
Tognazzini, B.: First principles of interaction design (revised & expanded). https://asktog.com/atc/principles-of-interaction-design/
Gerhardt-Powals, J.: Cognitive engineering principles for enhancing human-computer performance. Int. J. Hum. Comput. Interact. 8, 189–211 (1996). https://doi.org/10.1080/10447319609526147
Mandel, T.: The golden rules of user interface design. The Elements of User Interface Design, pp. 1–28. John Wiley & Sons Inc, Hoboken (1997)
Atkinson, B.F.W., Bennett, T.O., Bahr, G.S., Nelson, M.M.W.: Development of a multiple heuristics evaluation table (MHET) to support software development and usability analysis. International Conference on Universal Access in Human-Computer Interaction, pp. 563–572. Springer, Berlin (2007)
Granollers, T.: Usability evaluation with heuristics. New proposal from integrating two trusted sources. In: Marcus, A., Wang, W. (eds.) DUXU 2018. LNCS, vol. 10918, pp. 396–405. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-91797-9_28
Sohl, M.: Comparing two heuristic evaluation methods and validating with usability test methods: applying usability evaluation on a simple website. Student thesis, Linköping University (2018)
Hvannberg, E.T., Law, E.L.C., Lárusdóttir, M.K.: Heuristic evaluation: comparing ways of finding and reporting usability problems. Interact. Comput. 19, 225–240 (2007). https://doi.org/10.1016/j.intcom.2006.10.001
Desurvire, H., Caplan, M., Toth, J.A.: Using heuristics to evaluate the playability of games. CHI’04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, 24–29 April 2004), pp. 1509–1512. ACM, New York (2004)
Inostroza, R., Rusu, C., Roncagliolo, S., Rusu, V.: Usability heuristics for touchscreen-based mobile devices: update. In: Proceedings of the 2013 Chilean Conference on Human - Computer Interaction. Association for Computing Machinery, Temuco, Chile, pp. 24–29 (2013). https://doi.org/10.1145/2535597.2535602
Alsumait, A., Al-Osaimi, A.: Usability heuristics evaluation for child e-learning applications. In: Proceedings of the 11th International Conference on Information Integration and Web-Based Applications & Services. Association for Computing Machinery, Kuala Lumpur, Malaysia, pp. 425–430 (2009)
de Salgado, A.L., de Lara, S.M., Freire, A.P., de Fortes, R.P.M.: What is hidden in a heuristic evaluation: tactics from the experts. In: 13th International Conference on Information Systems & Technology Management - Contecsi. Contecsi, São Paulo, SP, Brazil, pp. 2931–2946 (2016). https://doi.org/10.5748/9788599693124-13CONTECSI/PS-4068
Alqurni, J., Alroobaea, R., Alqahtani, M.: Effect of user sessions on the heuristic usability method. Int. J. Open Source Softw. Process. (IJOSSP) 9, 62–81 (2018). https://doi.org/10.4018/ijossp.2018010104
Wodike, O.A., Sim, G., Horton, M.: Empowering teenagers to perform a heuristic evaluation of a game. In: Proceedings of the 28th International BCS Human Computer Interaction Conference (HCI 2014), pp. 353–358 (2014)
Salian, K., Sim, G.: Simplifying heuristic evaluation for older children. In: Proceedings of the India HCI 2014 Conference on Human Computer Interaction. Association for Computing Machinery, New Delhi, India, pp. 26–34 (2014)
Botella, F., Alarcon, E., Peñalver, A.: How to classify to experts in usability evaluation. In: Proceedings of the XV International Conference on Human Computer Interaction. Association for Computing Machinery, Puerto de la Cruz, Tenerife, Spain, p. 25 (2014)
Ericsson, K.A., Prietula, M.J., Cokely, E.T.: The making of an expert. Harv. Bus. Rev. 85(114–121), 193 (2007)
Hambrick, D.Z., Oswald, F.L., Altmann, E.M., Meinz, E.J., Gobet, F., Campitelli, G.: Deliberate practice: is that all it takes to become an expert? Intelligence 45, 34–45 (2014). https://doi.org/10.1016/j.intell.2013.04.001
Guest, G., Bunce, A., Johnson, L.: How many interviews are enough?: An experiment with data saturation and variability. Field Methods 18, 59–82 (2006). https://doi.org/10.1177/1525822X05279903
Blandford, A., Furniss, D., Makri, S.: Qualitative HCI research: going behind the scenes. Synth. Lect. Hum. Centered Inform. 9, 1–115 (2016). https://doi.org/10.2200/S00706ED1V01Y201602HCI034