Healthcare Technology Management (HTM), Part 7: Heuristic Analysis

Chapter 7. Heuristic analysis
Section 7.1. Setting the Stage
When a technology is poorly designed, it can lead people to make mistakes while interacting with it. In healthcare, this can be especially serious given the complexity of technology and what we need it to do. Many technologies in healthcare provide life-saving and life-supporting functions for complex patients with changing medical status. When this technology is poorly designed, it can be responsible for errors leading to patient safety events. Identifying technology designs, or aspects of design, that violate best practices for designing Human-tech [9] systems is a potentially life-saving undertaking. Heuristic analysis is one method by which technology design can be evaluated to determine whether users will find it challenging to operate.
Section 7.2. What is Heuristic analysis?
A heuristic analysis is an analysis method whereby usability experts evaluate a design based on established “rules of thumb”. Historically, heuristic analyses were performed on human-computer interaction systems to evaluate software interfaces and determine whether such systems could be considered “usable”. Well-established guidance for designing good user interfaces has been developed by two leading experts: Nielsen, with his 10 Usability Heuristics for User Interface Design [32], and Shneiderman, with his Eight Golden Rules of Interface Design [33]. More recently, Zhang [34] combined and tailored these design principles into 14 Usability Heuristics to facilitate the heuristic analysis of medical devices (Table 1).
During the analysis, design characteristics that violate one or more heuristics are identified. For each violation, the evaluator(s) identify what use problems will likely arise as a result of the violation and the potential impact of each use problem.
Table 1. Adaptation of Zhang et al.'s 14 Usability Heuristics for Medical Devices
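As a reference point, the heuristic names published by Zhang et al. [34] can be captured in a simple structure for an evaluator reference sheet. A minimal sketch in Python (the sub-codes used in the adapted table, e.g., 1a, 1b, are omitted):

```python
# The 14 usability heuristics of Zhang et al. [34], keyed by the numeric
# codes used when reporting violations (e.g., violation "2a" falls under
# heuristic 2, Visibility of system state).
ZHANG_HEURISTICS = {
    1: "Consistency and standards",
    2: "Visibility of system state",
    3: "Match between system and world",
    4: "Minimalist design",
    5: "Minimize memory load",
    6: "Informative feedback",
    7: "Flexibility and efficiency",
    8: "Good error messages",
    9: "Prevent errors",
    10: "Clear closure",
    11: "Reversible actions",
    12: "Use users' language",
    13: "Users in control",
    14: "Help and documentation",
}
```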
Section 7.3. Why use Heuristic analysis
A heuristic analysis can be used to quickly identify usability issues with a technology that could have safety implications for patients and staff. Heuristic analysis is often preferred because it requires relatively few resources to identify design issues in comparison to many other human factors methods, such as usability testing.
From the biomedical technology professional’s perspective, completing a heuristic analysis will be helpful for:
• Evaluating whether a technology design violates established best practices, which could increase the chance of a use error and have a negative effect on patient safety
• Comparing the design of two similar technologies
• Predicting the types of use errors likely with a particular device design
• Suggesting device design improvements to vendors to make their products safer for your patients and other hospitals
Section 7.4. When to use Heuristic analysis
A heuristic analysis should be included as part of every technology procurement process. Before selecting a technology for implementation at your healthcare institution, it is crucial to ensure the device will not promote use error as a result of the way it has been designed. Having a sense of the design issues associated with a technology before choosing to implement it can help you to make a more informed decision, especially when you are comparing similar products. If several products are being considered, a heuristic analysis can help reduce the number that move forward in the procurement process, provided specific usability criteria are outlined in the request for proposals (see Chapter 11 for more on human factors in procurement). If heuristic violations are identified in advance of a procurement decision, you may have some leverage to suggest improvements to the vendor, and your healthcare institution has the opportunity to identify other types of mitigating strategies that can be implemented from within the organization.
After a near miss, or adverse event, heuristic analysis can be used to determine whether any design features of the device may have contributed to the incident. If a heuristic analysis does uncover design issues, immediate action should be taken to prevent a similar incident from happening to someone else. When it comes to issues with technology design it is important to note that training people to overcome poor design is not effective. For more information about the effectiveness of different mitigating strategies, see Section 3.5.
Finally, if you ever design technology solutions as part of your role as a biomedical technology manager, having someone else conduct a heuristic analysis on your design provides an excellent opportunity to minimize use error resulting from a heuristic violation. Heuristic analysis done early in a user-centred design process can streamline the development of your solution while ensuring it will meet user needs.
Section 7.5. In Preparation for Heuristic analysis
Section 7.5.1. Become Familiar with the Device
In preparation for a heuristic analysis, you should first become familiar with the technology by interacting with it to learn about its purpose, settings, screens, modes of operation, and any interfacing components.
In addition to learning about the device itself, it will also be important to understand the tasks that will be carried out with the technology. If the technology to be evaluated is already used in the field, it is highly recommended that observations and interviews be completed to learn about how it is typically used. If the device is not presently used in the field, try to observe and interview staff using a device having a similar purpose to the technology of interest, or try to observe in a different environment (e.g., another facility) where the device is currently being used. This information will be important for outlining a list of tasks that assessors will walk through as they complete their heuristic analysis. In addition to observation, reviewing the product manual and instructions for use is helpful for understanding the intended capabilities of the product. Doing this before observations will allow you to look for evidence of which features and functions of the device are utilized when you are conducting observations.
Section 7.5.2. Create a Task List
Using the observations and any interview data, create a step-by-step list or description of the tasks that are carried out with the technology. If there is more than one user group, make sure the task list includes the tasks performed by each unique user group, especially any safety-critical or worst-case scenario tasks. It is recommended that observations (Chapter 4) be conducted to support the development of the task list. This task list will be used to guide each evaluator step-by-step through their heuristic analysis. It is important to consider the full range of tasks in the heuristic analysis, since it is much easier to do so with this method than with other human factors evaluation methods, such as usability testing.
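For illustration, a task list might be captured in a structure like the following. This is a sketch for a hypothetical infusion pump evaluation; the device, user groups, and tasks are invented for the example:

```python
# Hypothetical task list for a heuristic analysis of an infusion pump.
# Each entry records the user group that performs the task and whether
# the task is safety critical, so evaluators cover the full range of use.
TASK_LIST = [
    {"task": "Program a continuous infusion (drug, dose, rate)",
     "user_group": "nurse", "safety_critical": True},
    {"task": "Administer a bolus or loading dose",
     "user_group": "nurse", "safety_critical": True},
    {"task": "Respond to an occlusion alarm",
     "user_group": "nurse", "safety_critical": True},
    {"task": "Replace the battery and confirm settings are retained",
     "user_group": "biomedical technologist", "safety_critical": False},
]
```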
Section 7.5.3. Identify Your Evaluators
Once you are familiar with the technology to be evaluated and the tasks that are commonly performed, you will want to identify your evaluators. Ideally, people having knowledge of both the work being performed and human factors should be included as
evaluators for a heuristic analysis. However, if this is not possible, a combination of evaluators with either one of these areas of expertise can be included instead. Try to provide your evaluators with some time to familiarize themselves with the usability heuristics and severity rating scheme prior to conducting an analysis. If possible, have inexperienced evaluators practice applying the usability heuristics and severity rating scheme to a different device or object prior to carrying out the technology evaluation. Usually between three and five people should independently complete a heuristic evaluation to identify as many usability issues as possible given the objectives and resources available for your evaluation [35].
Section 7.5.4. Develop a Severity Rating Scale
The aim of the evaluation is to identify design issues that have the potential to result in safety and usability problems. For each safety and usability problem identified, a severity rating should be assigned to help identify the high-priority issues and to facilitate a comparison across products, if the analysis is being done to support a comparative evaluation. Table 2 shows a severity rating scale adapted from Zhang et al.'s work. However, each heuristic analysis should include the development of a rating scale that most appropriately categorizes the types of risks encountered for the technology being evaluated. Table 3 shows a severity rating scale that incorporates both safety and usability concerns and is divided into only three severity categories: low, medium, and high.
Table 2. Severity scale adapted from scale presented in Zhang et al (2003).

Severity  Description
0         Not a usability problem. No fix required.
1         Cosmetic problem only. Need not be fixed unless extra time is available.
2         Minor usability problem. Fixing this should be given low priority.
3         Major usability problem. Fixing this is important and should be given high priority.
4         Usability catastrophe. Fixing this is imperative and must be done before the product can be released.
Table 3. A severity rating scale that incorporates both usability and safety concerns.

Severity  Description
1         Low Severity: An issue that may be mildly frustrating to the user.
2         Medium Severity: A serious issue that may be very frustrating to the user and/or makes it difficult for the user to complete the task correctly and efficiently.
3         High Severity: A critical issue that may be highly detrimental to the user's ability to interact with the system and/or has potential for causing patient harm.
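If findings will be collated electronically, the scale in Table 3 can be encoded directly so that ratings sort and compare correctly. A minimal sketch, assuming the three-level scale above (names are illustrative):

```python
from enum import IntEnum

class Severity(IntEnum):
    """The three-level severity scale of Table 3."""
    LOW = 1     # Mildly frustrating to the user
    MEDIUM = 2  # Very frustrating and/or impedes correct, efficient completion
    HIGH = 3    # Highly detrimental to use and/or potential for patient harm

# IntEnum values compare and sort numerically, which is convenient when
# the most severe violations are highlighted later in the analysis.
assert Severity.HIGH > Severity.MEDIUM > Severity.LOW
```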
Section 7.5.5. Prepare an Evaluator Reference Sheet and a Data Collection Template
Prepare a reference sheet for each evaluator that provides them with the framework for the analysis. This should include the following:
• A list of the 14 usability heuristics and their definitions.
• The severity rating scale.
• The list of tasks they should perform to guide their interaction with the technology as they look for issues.
In addition to a reference sheet, evaluators should be given a template to record their findings, especially in the event an isolated heuristic analysis approach is desired, to ensure each evaluator provides adequate detail as part of their analysis. These materials should be given to the evaluators in advance so they have time to get familiar with them before the evaluation. Consider including space for assessors to provide the following pieces of information for each usability problem:
1. Where the usability issue occurred in the interface
When assessing a software component of the technology, this would be the screen where the violation exists. When assessing a hardware component of the technology, it is the physical location where a violation exists.
2. A description of the usability problem
For either a software or hardware component of the technology, a description of the violation in words helps differentiate between similar violations uncovered through the heuristic analysis.
3. A description of the potential consequences of the issue (if known)
A description of the potential impact of the issue on the user. For example, the issue “it is not clear what the menu option ‘loading dose’ means” may have the impact “users may not be able to figure out how to administer what they refer to as a ‘bolus dose’, causing a delay in administering pain medication to the patient”.
4. The violation code for the usability problem (optional)
Including space for assessors to provide the number and letter combination representing the type of violation found is optional. In some cases, assessors may find usability issues that are not overtly included in Table 1, so if you do provide space for the assessor to write the violation code, ensure they are aware they should still include any violations identified that are not specifically listed in the table.
5. The severity of the usability problem
The severity rating assigned to each violation based on the severity rating scale developed for the analysis.
A sample template to collect this information is shown in Table 4.
Table 4. Example of a data collection template for evaluators

Violation Code | Location of Usability Issue | Description of Usability Issue | Description of Potential Consequences | Severity Rating
1b | Starting screen | Use of colour (red and green may be difficult to see if colour blind) | User may select wrong folder | 3
1d | | Multiple fonts/inconsistent font use on screen | User interprets fonts to have an implied meaning when they do not. Frustrating to users. | 2
2a | | Unsure of system state (not sure if system is starting up on its own or if it is waiting for an input from me) | User presses buttons while waiting and makes selections on the next screen without knowing what they have selected | 2
… | … | … | … | …
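Where findings are recorded electronically rather than on paper, the Table 4 template maps naturally onto a small record type. A minimal sketch (field names and the output file name are illustrative):

```python
import csv
from dataclasses import dataclass, asdict, fields

@dataclass
class Violation:
    """One row of the Table 4 data collection template."""
    violation_code: str  # e.g., "2a"; optional, may be left blank
    location: str        # screen or physical location of the issue
    description: str     # the usability issue, described in words
    consequences: str    # potential impact on the user and patient
    severity: int        # rating from the project's severity scale

findings = [
    Violation("2a", "Starting screen",
              "Unsure of system state (starting up vs. awaiting input)",
              "User presses buttons while waiting and makes selections "
              "on the next screen without knowing what they have selected",
              2),
]

# Write one evaluator's findings to a CSV file for later collation.
with open("evaluator_1_findings.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(Violation)])
    writer.writeheader()
    writer.writerows(asdict(v) for v in findings)
```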
It is not necessary that a numeric score be allocated. Depending on the purpose of the heuristic analysis and how well the implications of the issues are understood, a qualitative rating such as high, medium, and low severity (establish definitions for each that are relevant to the technology being evaluated) can be used. The benefit of a qualitative severity rating scale is that it reduces the likelihood that decision makers who are considering the results of the heuristic analysis (e.g., in a technology selection decision) will place greater emphasis on the heuristic analysis results than on results from other human factors methods (e.g., usability testing) simply because quantitative data are easier to compare than qualitative data.
Section 7.5.6. Decide on the Format for Heuristic analysis
A heuristic analysis can be organized in one of two ways: either the evaluator completes their evaluation in isolation, or the evaluator completes their analysis during a facilitated session, where the facilitator observes the evaluator completing tasks, notes any issues, concerns, and preferences identified by the evaluator, and later finalizes the evaluation by mapping the observations to the heuristics. When assessors complete a heuristic analysis in isolation, it can help to reduce opportunities for unintentional bias to be introduced, including through interaction with the facilitator. Compiling information about violations and observations from each assessor, however, can be more resource intensive for the person in the facilitator role. When assessors complete a heuristic analysis during a facilitated session, it may be easier for the facilitator to compile information about violations and observations because of the additional context provided by seeing evaluators go through each task. It is more likely, however, that bias will be unintentionally introduced through interaction between the facilitator and evaluators. The approach you use will likely depend on the project objectives and the resources available. Additional information about selecting an approach can be found in Nielsen's How to Conduct a Heuristic Evaluation (Section 7.9 Additional Resources).
Section 7.6. Completing a Heuristic analysis
For either an individual isolated heuristic analysis or a facilitated session, ensure the evaluator has all the information they need, as well as access to the technology being evaluated. Each assessor should use the reference sheet with the 14 usability heuristics and severity rating scheme, along with the task list, and data collection sheet to independently evaluate the technology design. Any violations or observations should be recorded on the data collection sheet by the evaluator or the facilitator, depending on how the heuristic analysis has been set up.
The evaluator should go through the task list at least twice: first to become familiar with the device and to note any initial impressions and/or violations, and a second time to identify any violations that may have been missed during the initial pass.
When describing the consequence of a violation or problem, it is important to do so in relation to the goals or purpose of the technology. For example, if an intravenous infusion pump is being evaluated, its purpose is to support the administration of the correct medication, dose, rate, route, time, and so on. The consequences of the issues identified should therefore be described in terms of their impact on these functions (e.g., wrong dose (too high), wrong dose (too low), delay in medication administration), unless they are general ease-of-use issues, in which case the consequence may be user frustration. Describing consequences in terms of the goals or purpose of the system makes it easier to assign severity ratings, since the severity rating should be the same for all issues resulting in the same consequences.
Section 7.7. What to do with a Completed Heuristic Analysis
The goal of a heuristic analysis is to produce a single report that outlines all the issues identified, their potential consequences and the severity of those consequences to support one or more of the following aims:
1. To identify whether a health technology is likely to be safe and easy to use, provided there is a good fit between the device and the context of use (fit needs to be assessed using other methods, such as usability testing)
2. To compare the relative safety and usability of two or more products
It is important to keep in mind that the primary focus of a heuristic analysis is on identifying and describing the issues, rather than on identifying the particular heuristic violation that is causing each issue. The heuristics are a means of identifying issues, not the issues themselves.
Once each evaluator has completed their heuristic analysis, all the data must be collated into a single list of issues with a consequence description and a severity score assigned to each issue. This process is most efficiently done if one person inputs the data into a single spreadsheet and then all the evaluators come together to discuss each issue until the following is established:
• The usability problem description is clear and unique from all others
• The consequences are stated in terms that relate to the goal or overall function of the technology
• A single severity rating is assigned. Note: if consensus cannot be reached, you may want to use the weighted average of each evaluator's score (a sketch follows this list).
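As a sketch of that fallback, the individual scores for an issue can be averaged when consensus cannot be reached. An unweighted mean is shown here; a weighting scheme (e.g., by evaluator experience) could be substituted, and the issue names are invented for the example:

```python
from statistics import mean

# Severity ratings assigned to the same issue by four evaluators.
ratings_by_issue = {
    "Unclear 'loading dose' menu option": [3, 3, 2, 3],
    "Inconsistent fonts on starting screen": [2, 1, 2, 2],
}

for issue, ratings in ratings_by_issue.items():
    consensus = round(mean(ratings))  # unweighted mean; swap in weights if desired
    spread = max(ratings) - min(ratings)
    # A wide spread suggests the evaluators should discuss the issue
    # further before a single rating is recorded.
    flag = " (discuss further)" if spread > 1 else ""
    print(f"{issue}: severity {consensus}{flag}")
```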
Once severity ratings have been determined for each usability issue, the data collection spreadsheet should be organized so the most severe violations are highlighted. Generating a list of recommendations or proposed actions to address each severe violation may be helpful depending on the context of the heuristic analysis. If possible, those violations found to be severe should be addressed immediately according to the identified recommended actions.
If the heuristic analysis has been done to either proactively or retrospectively identify whether an in-house technology has usability issues, concerns that have been identified with the technology design should be addressed. As stated in Section 7.4, training people to overcome a violation in design is better than doing nothing, but it is not a very effective solution. Similarly, reliance on warning decals will not effectively mitigate the issues. If device-oriented changes are possible, they will be more effective, as will system changes that help to minimize the likelihood and severity of an error and/or improve its detectability. For more information about the effectiveness of different mitigating strategies, see Section 3.5.
If the heuristic analysis has been done for procurement purposes, the results can be used to determine whether any of the contending devices should be eliminated early in the selection process because of unfixable, catastrophic design flaws identified before usability testing is done.
Section 7.8. Limitations of Heuristic Analysis
Although heuristic analyses are extremely useful for identifying usability issues with a technology, there are also several limitations to consider.
Section 7.8.1. Informal Evaluation Method
A heuristic analysis is not a systematic method, and is limited in that the only usability issues that will be detected are those encapsulated by the heuristics themselves. If a particular device design issue falls outside of the 14 heuristics, it is unlikely to be identified through a heuristic analysis. Further, assigning a severity score to each usability issue tends to be a subjective exercise. For these reasons, a heuristic analysis is generally considered to be an informal evaluation method.
Section 7.8.2. Multiple Assessors Are Required
Having a single assessor conduct a heuristic analysis will not uncover all the usability issues with a technology design. Since people have their own unique perspectives and experiences, different people will uncover different usability issues as they interact with a technology. Increasing the number of assessors, therefore, will increase the proportion of usability issues identified through a heuristic analysis. According to Nielsen, a single evaluator is likely to uncover only about 35% of the usability problems with a
technology design [35]. When the number of evaluators is increased to five, however, you can expect about 75% of the usability issues to be identified.
The relationship between the number of assessors and the proportion of usability issues identified is not linear, so only minimal benefit will be seen as the number of evaluators is increased from five to ten or fifteen. For this reason, and to help control costs, it is recommended as a guideline that between three and five evaluators be included when completing a heuristic analysis. Another possible approach is to stop evaluating the technology once issue saturation has been reached, that is, when subsequent independent assessors are unable to identify any more unique design issues. When evaluators uncover very different issues, and there is little consistency or overlap among the heuristic violations found, this is a key indicator that additional evaluators should be included. Results from a heuristic analysis should be treated with caution if there is little consistency in the issues found among those evaluating the technology or system.
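This diminishing-returns relationship is often modelled with Nielsen and Landauer's formula, found(i) = N(1 - (1 - L)^i), where N is the total number of problems and L is the proportion found by a single evaluator. A minimal sketch, assuming L = 0.35 to match the single-evaluator figure cited above (the actual proportion varies by project and evaluator skill):

```python
def proportion_found(evaluators: int, single_rate: float = 0.35) -> float:
    """Expected proportion of usability problems found by a group of
    evaluators, per Nielsen and Landauer's model, assuming each evaluator
    independently finds the fraction single_rate of all problems."""
    return 1 - (1 - single_rate) ** evaluators

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} evaluators -> ~{proportion_found(n):.0%} of problems found")
# The printed curve flattens quickly beyond about five evaluators,
# which is why three to five is the usual recommendation.
```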
Section 7.8.3. Experienced Assessors Should be Involved
Ideally, usability experts, such as human factors professionals, should carry out heuristic analyses because they are trained to see issues that violate best-practice design principles. Additionally, subject matter experts (e.g., clinicians) should be included for their understanding of the processes that will be undertaken with the technology being evaluated. Pairing a usability expert and a subject matter expert for each evaluation can be an effective means of identifying a wider range of issues. If non-usability experts will be involved instead, it is recommended that, prior to undertaking a heuristic evaluation, they dedicate some time to becoming familiar with and practicing how to apply the 14 heuristics to different devices. For less experienced evaluators, you may also want to consider including more evaluators in an assessment than you would with experienced evaluators, in order to improve the likelihood of uncovering usability issues.
Section 7.8.4. Technology is Evaluated in Isolation
Another limitation of a heuristic analysis is that the assessment is typically done on a technology in isolation, without considering the users, processes, or environments where that device will be used. As a result, some usability issues may only come to light once the technology is considered in the context of the system of use. For example, when evaluating an infusion pump in a well-lit office, a usability expert may not detect any issues with the contrast between the text and background, but when nurses use that same pump at night in the ICU, the text is found to be quite difficult to read. To help overcome these challenges, in addition to a heuristic analysis, usability testing is also highly recommended.
Section 7.9. Additional Resources
Journal Articles
" Using usability heuristics to evaluate patient safety of medical devices” by
Jiajie Zhang et al.: Journal of Biomedical Informatics, 2003. 36: p. 23-30.
" “Human factors engineering: A tool for medical device evaluation in
hospital procurement decision-making” by Gill Ginsburg: Journal of Biomedical Informatics, 2005. 38(3): p. 213-219.
Websites
" The Nielsen Norman group for heuristic assessment
http://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/