Journal of the American Medical Informatics Association (JAMIA). 2020 Jul 4;27(8):1252–1258. doi: 10.1093/jamia/ocaa098

The tradeoffs between safety and alert fatigue: Data from a national evaluation of hospital medication-related clinical decision support

Zoe Co,1 A Jay Holmgren,2 David C Classen,3 Lisa Newmark,4 Diane L Seger,4 Melissa Danforth,5 David W Bates1,4,6
PMCID: PMC7647300  PMID: 32620948

Abstract

Objective

The study sought to evaluate the overall performance of hospitals that used the Computerized Physician Order Entry Evaluation Tool in both 2017 and 2018, along with their performance against fatal orders and nuisance orders.

Materials and Methods

We evaluated 1599 hospitals that took the test in both 2017 and 2018 by using their overall percentage scores on the test, along with the percentage of fatal orders appropriately alerted on, and the percentage of nuisance orders incorrectly alerted on.

Results

Hospitals showed overall improvement; the mean score in 2017 was 58.1%, and this increased to 66.2% in 2018. Fatal order performance improved slightly from 78.8% to 83.0% (P < .001), though there was almost no change in nuisance order performance (89.0% to 89.7%; P = .43). Hospitals alerting on one or more nuisance orders had a 3-percentage-point increase in their overall score.

Discussion

Despite the improvement in overall scores from 2017 to 2018, there was little improvement in fatal order performance, suggesting that hospitals are not targeting the deadliest orders first. Nuisance order performance showed almost no improvement, and some hospitals may be achieving higher scores by overalerting, suggesting that the thresholds at which alerts fire are too low.

Conclusions

Although hospitals improved overall from 2017 to 2018, there is still important room for improvement for both fatal and nuisance orders. Hospitals that incorrectly alerted on one or more nuisance orders had slightly higher overall performance, suggesting that some hospitals may be achieving higher scores at the cost of overalerting, which has the potential to cause clinician burnout and even worsen safety.

Keywords: patient safety, quality of care, electronic health record, computerized physician order entry, burnout

INTRODUCTION

Adoption of electronic health records (EHRs) equipped with computerized physician order entry (CPOE) has become increasingly widespread since the Health Information Technology for Economic and Clinical Health Act was passed in 2009.1 A major benefit of installing an EHR with CPOE is its medication-related clinical decision support (CDS) features, which enable prescribers to access patient data at the point of care and can aid in safely and accurately ordering medications.2,3 Specifically, CDS can provide prescribers with advice and warnings about the medications they are ordering, to prevent medication errors and subsequent adverse drug events from occurring.3–6

Although these systems have been shown to prevent medical errors,5 evidence also suggests that they do not always result in reduced harm.7–9 CPOE systems can have unintended consequences, such as introducing new types of errors.10 They can also cause alert fatigue, resulting in missed alerts,10 and can contribute to clinician burnout.11

EHRs are highly configurable, and most of this configuration takes place at the facility level. Yet it is difficult for organizations to track what decision support has been implemented and whether it is working as intended.12 Therefore, it is crucial that hospitals have evaluations they can take to assess how their systems perform against common and serious prescribing errors.13,14 In 2009, the CPOE Evaluation Tool was added to the Leapfrog Group’s annual hospital survey; hospitals use the tool to evaluate the safety of their EHR. The tool is a timed online assessment in which hospitals are provided with a set of test patients to program into their EHR, along with associated test medication orders to enter on each patient using CPOE. Hospitals then record any decision support they receive, and the tool provides immediate feedback in the form of an overall score and individual scores for each order-checking category. The test cases are drawn from actual practice, and some are situations in which patients died or suffered serious harm related to a medication error.

Past reports regarding the tool have found that most hospitals have basic decision support (ie, drug allergy checking) features implemented, while more advanced decision support (ie, cumulative dose checking) features need improvement.15,16 However, 2 important subcategories of the test have not previously been assessed: fatal orders and nuisance orders. Fatal orders are ones that have previously killed a patient, and we preferentially include those that have resulted in multiple fatalities. Nuisance orders are medication combinations that should not cause an alert to fire, such as those that were once thought to be potentially dangerous but have been shown to be safe.

In this study, we sought to answer 3 research questions. First, are hospitals improving their CDS-enabled CPOE performance, including preventing potentially fatal adverse drug events and reducing inappropriate nuisance alerts? Second, do hospitals that perform poorly overall have adequate safeguards for fatal orders? Finally, do hospitals that perform well overall achieve their success by having more overall alerts, including inappropriate nuisance alerts that may cause alert fatigue? To address these questions, we analyzed the performance of hospitals that took the test in both 2017 and 2018.

MATERIALS AND METHODS

History of the tool

The CPOE Evaluation Tool is an assessment tool designed by investigators at the University of Utah and Brigham and Women’s Hospital and is administered by the Leapfrog Group through their annual hospital survey. The Leapfrog Group is an organization formed in 2000, whose focus is making “great leaps forward” in the safety and quality of care,13 through publicly reporting on the safety of hospitals. They have a set of national safety standards, and early on, CPOE was identified as one of them.17 To achieve a “full demonstration of national safety standard for decision support” on the test (the highest rating a hospital can achieve), hospitals must enter at least 85% of their medication orders using CPOE and must correctly alert on 60% or more of the medication orders in the test (Supplementary Table 1).18 In this article, we refer to these hospitals as “full demonstration hospitals.”
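As a simple illustration of these two thresholds, here is a minimal sketch; the function name and inputs are ours for illustration and are not part of the Leapfrog survey itself.

```python
def is_full_demonstration(pct_orders_via_cpoe: float, pct_test_orders_alerted: float) -> bool:
    """Hypothetical check of the two criteria described above: at least 85% of
    medication orders entered via CPOE, and at least 60% of the test's medication
    orders correctly alerted on (see Supplementary Table 1 of the source article)."""
    return pct_orders_via_cpoe >= 85 and pct_test_orders_alerted >= 60


print(is_full_demonstration(90.0, 66.2))  # True: both thresholds are met
```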

A past evaluation of the tool showed that better performance correlated with a decreased rate of adverse drug events4: hospitals’ rate of preventable adverse drug events decreased by 43% for every 5% increase in score. In another study,15 results of the CPOE tool showed that most hospitals had basic decision support tools implemented, while advanced decision support tools were implemented far less often. More recently, Holmgren et al19 found that hospitals that took the test multiple times between 2009 and 2016 showed improvement in their overall scores, suggesting that consistent evaluation of hospital performance against medication errors may lead to improvement in medication safety.

How hospitals take the tool

To take the test, hospitals are provided with a list of test patients, and they program these individuals into their EHR. Included with each patient is their demographic information, a diagnosis, a list of laboratory values, and a list of allergies. Along with the test patients is a set of associated test medication orders. These test orders are organized into 10 order categories, covering both basic and advanced decision support (Supplementary Table 2).2 Licensed prescribers enter these test orders for each test patient using their CPOE system and record the decision support they receive (if any). This decision support can include pop-up alert messages as well as “hard stops” that prevent the order from being entered.

The fatal and nuisance medication orders are distributed across pre-existing order categories. Fatal orders are distributed across the drug dosing (both daily and single), drug laboratory, drug route, and drug-drug interaction categories, and are included in the calculation of the overall score. Nuisance orders are inconsequential medication combination orders (drug-drug interaction or therapeutic duplication orders) that are low severity and can be presented noninterruptively to reduce the number of alerts prescribers see, which can help in reducing alert fatigue.20 These are not included in the final overall percentage score.

After hospitals finish the test, they receive immediate feedback in the form of an overall percentage score and individual order category scores. The denominators for these scores are based on the number of test orders that hospitals can electronically order. For example, if a hospital does not have a drug available in its formulary, that test order is removed from the numerator and denominator. Given this, the overall score and category scores are calculated by dividing the number of orders appropriately alerted on by the total number of electronically orderable orders. Hospitals receive more detailed immediate feedback on the fatal and nuisance orders, in which the exact fatal orders that they failed to alert on during the test, and the nuisance orders their system should not have alerted on, are provided to them for review.
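To make the scoring arithmetic concrete, the sketch below computes such a percentage score; the data structure and field names are hypothetical and are not taken from the tool itself.

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class OrderResult:
    orderable: bool  # could the hospital enter this test order electronically?
    alerted: bool    # did the CPOE system provide appropriate decision support?


def percentage_score(results: List[OrderResult]) -> Optional[float]:
    """Share of electronically orderable test orders that were appropriately alerted on.

    Orders a hospital cannot enter (eg, a drug not on its formulary) are dropped from
    both numerator and denominator, mirroring the scoring rule described above.
    """
    eligible = [r for r in results if r.orderable]
    if not eligible:
        return None  # no orderable test orders, so no score can be computed
    return 100 * sum(r.alerted for r in eligible) / len(eligible)


# Example: 10 test orders, 1 not orderable, 6 of the remaining 9 alerted on -> 66.7
results = [OrderResult(True, True)] * 6 + [OrderResult(True, False)] * 3 + [OrderResult(False, False)]
print(round(percentage_score(results), 1))
```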

Also built into the test are deception orders. These orders are meant to prevent gaming of the test, in that they are completely normal and safe medication orders. When hospitals alert on 2 or more deception orders, their test is automatically scored as “incomplete,” and they cannot retake the test for the next 120 days.18

Analysis

Detailed datasets containing the results from the 2017 and 2018 CPOE Evaluation Tool were extracted for analysis. We chose to use a balanced panel of hospitals that took the test in both years to examine hospital improvement over time. All incomplete tests were removed, and if a hospital took the test twice in one year, only its highest score was kept. To report on hospital characteristics, we merged our data with data from the 2017 American Hospital Association (AHA) Annual Survey. The specific characteristics we evaluated were drawn from prior literature on hospital information technology adoption and performance,21 including hospital size, healthcare system membership, teaching status, hospital location (urban vs rural), region, and hospital ownership. Our final analytical dataset included the 1599 unique hospitals that took the test in both years, uniquely identified at the hospital-year level.
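A rough sketch of this panel construction is shown below; the file names and column names are hypothetical, and the actual test extract and AHA files are structured differently.

```python
import pandas as pd

# Hypothetical inputs: one row per hospital-year test attempt, plus 2017 AHA characteristics.
tests = pd.read_csv("cpoe_results_2017_2018.csv")  # hospital_id, year, overall_score, status, ...
aha = pd.read_csv("aha_annual_survey_2017.csv")    # hospital_id, size, system_member, ...

# Drop incomplete tests; if a hospital tested twice in a year, keep only its highest score.
tests = tests[tests["status"] != "incomplete"]
tests = (tests.sort_values("overall_score", ascending=False)
              .drop_duplicates(subset=["hospital_id", "year"], keep="first"))

# Balanced panel: keep only hospitals observed in both 2017 and 2018.
years_per_hospital = tests.groupby("hospital_id")["year"].nunique()
panel = tests[tests["hospital_id"].isin(years_per_hospital[years_per_hospital == 2].index)]

# Attach AHA characteristics; hospitals missing from the AHA survey stay in the panel
# but fall out of the Table 1 denominators.
panel = panel.merge(aha, on="hospital_id", how="left")
```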

We first calculated summary descriptive statistics of the demographic characteristics of hospitals in our sample. Next, we calculated the means for overall score, fatal order score, and nuisance order score in 2017 and 2018 for all hospitals in our sample. We used 2-sided t tests with unequal variances to compare the 2017 and 2018 values of overall scores, fatal order scores, and nuisance order scores. We then plotted the relationship between fatal order score and overall score, as well as the relationship between nuisance order score and overall score, using scatterplots. In both scatterplots, the x-axis shows the fatal order score or nuisance order score, the y-axis shows the overall score, and each data point represents one hospital’s pair of scores; a linear line of best fit is included. Finally, we ran a multivariate linear regression with hospital overall score as the dependent variable and a binary indicator of whether a hospital failed one or more nuisance orders as the independent variable of interest, including our hospital demographics as controls. The model also used hospital random effects and robust standard errors clustered at the hospital level.
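A rough sketch of the regression step follows. It approximates the specification with pooled OLS plus hospital-clustered robust standard errors using statsmodels; the paper’s model additionally includes hospital random effects, and the variable names continue the hypothetical columns from the panel sketch above.

```python
import statsmodels.formula.api as smf

# Binary indicator of interest: did the hospital fail (alert on) 1 or more nuisance orders?
panel["failed_nuisance"] = (panel["nuisance_failures"] >= 1).astype(int)

model = smf.ols(
    "overall_score ~ failed_nuisance + C(size) + system_member + teaching"
    " + urban + C(region) + C(ownership) + C(year)",
    data=panel,
)
# Cluster-robust standard errors at the hospital level; the hospital random effects
# used in the paper are omitted here for simplicity.
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["hospital_id"]})
print(result.summary())
```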

This study was approved by Brigham and Women’s Hospital’s institutional review board (2014P001614) as an exempt protocol.

RESULTS

A total of 1599 hospitals took the test in both 2017 and 2018. Of these, we identified 1188 hospitals that responded to the AHA Survey. The majority (60.9%) of hospitals are medium-sized hospitals with 100-399 beds, while 20.9% are large hospitals with 400 or more beds and 18.3% are small hospitals with fewer than 100 beds (Table 1).

Table 1.

Sample hospital demographics

Hospital size
Small hospitals with fewer than 100 beds 217 (18.3)
Medium-sized hospitals with 100-399 beds 723 (60.9)
Large hospitals with 400+ beds 248 (20.9)
System membership
Nonsystem hospital 189 (15.9)
System member hospital 999 (84.1)
Teaching status
Nonteaching hospital 511 (43.0)
Teaching hospital 677 (57.0)
Hospital location
Located in an urban area 1049 (88.3)
Located in a rural area 139 (11.7)
Region
West 310 (26.1)
Midwest 217 (18.3)
South 425 (35.8)
Northeast 236 (19.9)
Hospital ownership
Public, nonfederal hospital 127 (10.7)
Private, nonprofit hospital 789 (66.4)
Private, for-profit hospital 272 (22.9)

Values are n (%). Because not all hospitals in our sample responded to the AHA Survey, the denominator was limited to 1188 hospitals.

Most (84.1%) of the hospitals in our sample belong to a healthcare system, while 15.9% do not. Over half (57.0%) are teaching hospitals, and 43.0% are nonteaching hospitals. The majority (88.3%) of hospitals are in urban areas, while 11.7% are in rural areas. Our sample includes hospitals from all 4 regions of the country: 35.8% are in the South, 26.1% are in the West, 19.9% are in the Northeast, and 18.3% are in the Midwest. Last, most (66.4%) hospitals are private, nonprofit hospitals; 22.9% are private, for-profit hospitals; and 10.7% are public, nonfederal hospitals.

The overall performance of hospitals improved from 2017 to 2018. The mean score in 2017 was 58.1%, and the mean score in 2018 was 66.2% (Figure 1). Fatal order performance also improved, in which hospitals appropriately alerted on 78.8% of fatal orders in 2017, and 83.0% in 2018 (P < .001). However, hospital performance for nuisance orders remained essentially unchanged between the 2 years—in 2017, hospitals correctly did not alert on 89.0% of nuisance orders, and in 2018, they did not alert on 89.7% of nuisance orders (P = .43).

Figure 1.

Overall computerized physician order entry performance from 2017 to 2018: overall, fatal orders, and nuisance orders. The P value for the overall score and the fatal order score is P < .001; the P value for the nuisance order score is P = .43.

Next, we assessed the relationship between overall hospital performance and fatal order scores. We found an overall linear relationship between the 2 variables (Figure 2). Hospitals that perform well overall tend to also appropriately alert on almost all of the fatal orders: the mean overall score of hospitals that correctly alerted on all of their fatal orders was 70%. In contrast, hospitals that do not perform well overall also tend to perform poorly on fatal orders: the mean overall score of hospitals that did not alert on any fatal orders was 39%.

Figure 2.

Scatterplot showing the positive relationship between overall scores and fatal order scores. Each data point represents a hospital’s overall score and its fatal order score.

We ran a similar analysis between overall scores and nuisance order scores (Figure 3). Unlike fatal order scores, a high nuisance order percentage score indicates that the hospital alerted on few or none of the nuisance orders, while a low nuisance order percentage score indicates that the hospital alerted on several nuisance orders. Full demonstration hospitals have variable performance against nuisance orders: some do not alert on any, while others alert on all or most of the nuisance orders. Hospitals that alerted on all of the nuisance orders had a mean overall score of 74%, while hospitals that did not alert on any nuisance orders had a mean overall score of 62%. There is a statistically significant negative relationship between overall performance and nuisance order performance: hospitals with higher overall scores were more likely to inappropriately alert on nuisance orders. In our multivariate linear regression analysis, we found a strong association between hospitals’ nuisance order performance and their overall performance. Hospitals that alerted on 1 or more nuisance orders scored on average 3.19 percentage points higher on their overall score than hospitals that did not alert on any nuisance orders, even when controlling for observable hospital characteristics (Table 2).

Figure 3.

Scatterplot showing the negative relationship between overall scores and nuisance order scores. Note that a high nuisance order score indicates that a hospital alerted on none or very few of the nuisance orders, while a low nuisance order score indicates that a hospital incorrectly alerted on several nuisance orders. Each data point represents a hospital’s overall score and its nuisance order score.

Table 2.

Multivariate regression analysis showing the association between failing 1 or more nuisance orders and overall score, while controlling for observable hospital characteristics

Coefficient P value [95% CI]
Nuisance order failure
Failed 0 nuisance orders Reference
Failed 1+ nuisance order 3.19 .00 1.93 to 4.45
Hospital demographics
Hospital size
Small hospitals fewer than 100 beds Reference
Medium-sized hospitals 100-399 beds 0.61 .60 −1.66 to 2.88
Large hospitals with 400+ beds 0.76 .61 −2.18 to 3.69
System membership
Nonsystem hospital Reference
System member hospital 0.41 .73 −1.96 to 2.79
Teaching status
Nonteaching hospital Reference
Teaching hospital 0.86 .35 −0.94 to 2.67
Hospital location
Located in a rural area Reference
Located in an urban area 3.04 .03 0.35 to 5.74
Region
West Reference
Midwest −4.17 .00 −6.51 to −1.82
South −6.44 .00 −8.96 to −3.91
Northeast −4.09 .00 −6.44 to −1.75
Hospital ownership
Public, nonfederal hospital Reference
Private, nonprofit hospital 0.59 .71 −2.51 to 3.70
Private, for-profit hospital −0.16 .87 −2.00 to 1.68

CI: confidence interval.

In addition, we modeled nuisance order scores flexibly with varying thresholds (Supplementary Table 3). The results of this analysis are consistent with our results from Table 2 but indicate that low percentages of incorrect nuisance orders (up to 25%) were not significantly associated with higher overall scores. Last, because we chose to use a balanced panel of hospitals that took the test in both years, we performed sensitivity analyses using an unbalanced panel of hospitals that took the evaluation in 2017, 2018, or both, and found results consistent with our balanced panel (Supplementary Table 4).
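One way such a threshold analysis could be implemented, continuing the hypothetical column names used in the Methods sketches, is to bin the percentage of nuisance orders each hospital incorrectly alerted on and enter the bins as categorical regressors; the cut points below are illustrative only.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Bin each hospital's nuisance failure percentage (0%, >0-25%, >25-50%, >50-100%).
panel["nuisance_fail_bin"] = pd.cut(
    panel["nuisance_fail_pct"],
    bins=[-0.01, 0, 25, 50, 100],
    labels=["0%", ">0-25%", ">25-50%", ">50-100%"],
)

threshold_model = smf.ols(
    "overall_score ~ C(nuisance_fail_bin) + C(size) + system_member + teaching"
    " + urban + C(region) + C(ownership) + C(year)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["hospital_id"]})
print(threshold_model.summary())
```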

DISCUSSION

We evaluated performance for hospitals that took the test in both 2017 and 2018 and found that overall scores increased by approximately 8 percentage points between the 2 years, suggesting that repeated assessment of a hospital’s CPOE system may lead to improvement in its overall score on the test. Fatal order performance also improved, from 78.8% in 2017 to 83% in 2018, although hospitals that performed poorly on the test continued to do poorly for fatal orders. However, performance for nuisance orders did not improve, and some hospitals with good overall performance appeared to achieve it at the expense of displaying alerts for more than half of the nuisance orders in their test.

These results have several implications. First, they confirm that hospitals taking the test multiple times tend to improve, as is intended. This result is consistent with previous work that reported on results of the tool from 2009 to 2016, in which our team found that hospitals that took the test multiple times performed better than hospitals taking it for the first time.19 Because hospitals receive immediate feedback after each test, they can immediately recognize order categories that they performed well in as well as ones that need improvement. This feedback allows hospitals to focus on improving specific areas of their CPOE system to improve quality and safety.

For fatal orders, our hope has been that hospitals would implement alerts for all of these orders. While hospitals are not told which individual items they get incorrect overall, they are provided with the specific fatal orders they get incorrect, with the intent that organizations will ensure these orders are detected in the future. We found that overall performance and fatal order performance have a linear relationship, in that full demonstration hospitals generally perform well for fatal orders and low-performance hospitals do not. These results suggest that hospitals may not be targeting the deadliest orders first with their CPOE systems, and that low overall performance is a likely predictor of low fatal order performance. These data also suggest that low-performing hospitals may need stronger incentives than they currently have if they are to perform better in this area. Many of these hospitals likely adopted CPOE relatively recently.

As with the fatal orders, the test provides hospitals with the specific nuisance orders they incorrectly alerted on. Between 2017 and 2018, there was almost no improvement in hospital performance against nuisance orders. In addition, the relationship between overall scores and nuisance order scores is negative, unlike the relationship between fatal order scores and overall scores. Nuisance order performance is quite variable among full demonstration hospitals: some have perfect nuisance order scores, while others alert on all or most of the nuisance orders they are given. Most notably, hospitals that alerted on all of the nuisance orders had a mean overall percentage score of 74%, which is much higher than the average overall score in both years (Figure 3). Furthermore, the multivariate regression analysis showed that nuisance order performance is related to overall hospital performance: hospitals that alerted on 1 or more nuisance orders scored about 3 percentage points higher overall. These findings suggest that full demonstration hospitals may achieve their scores by turning on too many alerts in their system, including those that can cause alert fatigue. When all of the alerts in a CPOE system are turned on, the threshold at which alerts fire is very low.22 As a result, alerts of all severities fire, potentially leading to alert fatigue, which has played a prominent role in causing physician burnout.11,20 Some hospitals may not have a choice, though, as certain vendors only allow hospitals to control which alerts display at a “tier” level, so that they must display all “level 2” alerts (which generally means the provider should be interrupted) or none. This can be problematic, as some alerts scored in the level 2 category are important, whereas others are not.

Overall, though, alert fatigue can have serious consequences for safety, in that prescribers may miss important alerts if they receive overwhelming numbers of unnecessary or irrelevant alerts.23–25 Some full demonstration hospitals that perform poorly against nuisance orders may be achieving their scores at the cost of potentially causing alert fatigue, which may in turn adversely affect safety. Since 2017, hospitals have been provided with the exact nuisance orders they alerted on, and it is unclear whether they will take action to reduce the number of nuisance orders their systems alert on. Given the issues described previously, organizations may need to work with their EHR vendors to refine which alerts are displayed.

The results of our study show that there was improvement in hospitals’ overall performance and fatal order performance but almost no improvement against nuisance orders. Although fatal order performance improved, the improvement was minimal, and performance in this category should be better. Owing to the severity of these orders, it is expected that all hospitals would alert on all, or at least most, of these orders. In terms of nuisance order performance, there is evidence that some full demonstration hospitals may achieve their high scores by overalerting, which has serious implications for safety, as it can cause alert fatigue. As a result of these findings, nuisance orders will be included in the overall score in the next cycle of the assessment; to date, this category has not been included in scoring. By doing this, our intent is to encourage hospitals to take action to reduce overalerting.

Owing to the variability of hospital performance across all areas of the test, it is evident that how an EHR is implemented and used is crucial to how hospitals perform on the test. Although we did not report on test performance by EHR vendor, past results of the tool have shown high variation in performance within EHR vendors, supporting the idea that simply having an EHR is not enough and that implementation is far more important.15 As described previously, hospitals and vendors both play crucial roles in achieving high levels of performance. The overall results of this tool are useful for gaining a general understanding of how a hospital’s CPOE system performs against common and serious prescribing errors. To further improve safety, hospitals should also focus on their performance against fatal and nuisance orders, as these areas have significant impacts on patient safety.

Limitations

Our study has several limitations. First, the results of the tool include only hospitals that took the CPOE Evaluation Tool in both 2017 and 2018. While this included about a third of U.S. hospitals, this group may not be representative of all hospitals in the United States. However, hospitals that self-select into the evaluation are likely to be those interested in quality improvement, suggesting that our results may be biased upward. Our sensitivity analysis shows that the effects of this bias were mild (Supplementary Table 4). In addition, only 2 years’ worth of data is not enough to establish trends; analyzing these hospitals’ performance over several years would be needed to assess whether the relationships observed in this study are sustained. Additionally, some hospitals that took the CPOE Evaluation Tool did not respond to the AHA Annual Survey. Hospitals that did respond had higher overall and fatal order scores, but did not differ significantly in nuisance order scores (Supplementary Table 5). Next, we did not assess patient outcomes, as these were test patients, though previous work has shown a relationship between the Leapfrog CPOE score and actual rates of adverse drug events.4 In addition, the relationship between failing nuisance orders and overall scores is not necessarily causal; determining causality would require a randomized trial or a quasi-experimental observational study. Also, further research is necessary to explore the complicated trade-offs between nuisance alerting, which may cause alert fatigue leading to physician burnout, and underalerting, which may lead more directly to adverse drug events. Last, the fatal orders and nuisance orders in the test are not inclusive of all types of orders that may be observed in hospital settings; these orders belong to only some of the pre-existing order-checking categories in the test.

CONCLUSION

Although EHR and CPOE implementation has become widespread, significant improvement is still needed in medication-related decision support performance. Hospitals that took the Leapfrog Group’s CPOE Evaluation Tool in both 2017 and 2018 showed improvement in their overall scores and modest improvement for fatal orders, but almost no improvement against nuisance orders. We found that hospitals that inappropriately alerted on 1 or more nuisance orders had overall scores more than 3 percentage points higher. This suggests that some full demonstration hospitals achieved their scores by having too many alerts turned on, including inappropriate ones that may cause alert fatigue and clinician burnout. Overall, hospitals and vendors will likely need to work together to address this issue. The Leapfrog Group’s decision to incorporate nuisance orders into the overall score in the next cycle of the test aims to encourage hospitals to more appropriately address overalerting. Hospitals seeking to improve their medication-related decision support should be cognizant of the potential trade-offs and may wish to first address potentially fatal orders while also seeking to limit unnecessary warnings.

FUNDING

This work was supported by the Agency for Healthcare Research and Quality grant number R01HS023696 (DWB).

AUTHOR CONTRIBUTIONS

All authors contributed to this study through data collection, statistical analysis, drafting, or supervision, and met the criteria for authorship per International Committee of Medical Journal Editors guidelines.

SUPPLEMENTARY MATERIAL

Supplementary material is available at Journal of the American Medical Informatics Association online.

CONFLICT OF INTEREST STATEMENT

DWB consults for EarlySense, which makes patient safety monitoring systems; receives cash compensation from CDI (Negev), Ltd, which is a not-for-profit incubator for health information technology startups; receives equity from ValeraHealth, which makes software to help patients with chronic diseases, Clew, which makes software to support clinical decision making in intensive care, MDClone, which takes clinical data and produces deidentified versions of it, and AESOP, which makes software to reduce medication error rates; and will receive research funding from IBM Watson Health; and his financial interests have been reviewed by Brigham and Women’s Hospital and Partners HealthCare in accordance with their institutional policies. MD is an employee at the Leapfrog Group. All other authors have no competing interests to declare.

References

1. Adler-Milstein J, Jha AK. HITECH Act drove large gains in hospital electronic health record adoption. Health Aff (Millwood) 2017; 36 (8): 1416–22.
2. Kuperman GJ, Bobb A, Payne TH, et al. Medication-related clinical decision support in computerized provider order entry systems: a review. J Am Med Inform Assoc 2007; 14 (1): 29–40.
3. Bates DW, Teich JM, Lee J, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc 1999; 6 (4): 313–21.
4. Leung AA, Keohane C, Lipsitz S, et al. Relationship between medication event rates and the Leapfrog computerized physician order entry evaluation tool. J Am Med Inform Assoc 2013; 20 (e1): e85–90.
5. Bates DW, Leape LL, Cullen DJ, et al. Effect of computerized physician order entry and a team intervention on prevention of serious medication errors. JAMA 1998; 280 (15): 1311–6.
6. Radley DC, Wasserman MR, Olsho LE, Shoemaker SJ, Spranca MD, Bradshaw B. Reduction in medication errors in hospitals due to adoption of computerized provider order entry systems. J Am Med Inform Assoc 2013; 20 (3): 470–6.
7. Koppel R, Metlay JP, Cohen A, et al. Role of computerized physician order entry systems in facilitating medication errors. JAMA 2005; 293 (10): 1197–203.
8. Chou D. Health IT and patient safety: building safer systems for better care. JAMA 2012; 308 (21): 2282.
9. Schiff GD, Amato MG, Eguale T, et al. Computerised physician order entry-related medication errors: analysis of reported errors and vulnerability testing of current systems. BMJ Qual Saf 2015; 24 (4): 264–71.
10. Ash JS, Sittig DF, Poon EG, Guappone K, Campbell E, Dykstra RH. The extent and importance of unintended consequences related to computerized provider order entry. J Am Med Inform Assoc 2007; 14 (4): 415–23.
11. Gregory ME, Russo E, Singh H. Electronic health record alert-related workload as a predictor of burnout in primary care providers. Appl Clin Inform 2017; 8 (3): 686–97.
12. Wright A, Ai A, Ash J, et al. Clinical decision support alert malfunctions: analysis and empirically derived taxonomy. J Am Med Inform Assoc 2018; 25 (5): 496–506.
13. Kilbridge PM, Welebob EM, Classen DC. Development of the Leapfrog methodology for evaluating hospital implemented inpatient computerized physician order entry systems. Qual Saf Health Care 2006; 15 (2): 81–4.
14. Classen D, Avery A, Bates D. Evaluation and certification of computerized provider order entry systems. J Am Med Inform Assoc 2007; 14 (1): 48–55.
15. Metzger J, Welebob E, Bates DW, Lipsitz S, Classen DC. Mixed results in the safety performance of computerized physician order entry. Health Aff (Millwood) 2010; 29 (4): 655–63.
16. Chaparro J, Classen D, Danforth M, Stockwell D, Longhurst C. National trends in safety performance of electronic health record systems in children’s hospitals. J Am Med Inform Assoc 2016; 24 (2): 268–74.
17. The Leapfrog Group. Leapfrog Hospital Survey. Factsheet: Computerized Physician Order Entry. https://www.leapfroggroup.org/sites/default/files/Files/2020%20CPOE%20Fact%20Sheet.pdf Accessed June 5, 2019.
18. The Leapfrog Group. The Leapfrog Hospital Survey Scoring Algorithms. https://www.leapfroggroup.org/sites/default/files/Files/2020ScoringAlgorithms_20200401_v8.1%20%28version%202%29.pdf Accessed June 26, 2019.
19. Holmgren AJ, Co Z, Newmark L, Danforth M, Classen D, Bates D. Assessing the safety of electronic health records: a national longitudinal study of medication-related decision support. BMJ Qual Saf 2020; 29 (1): 52–9.
20. Phansalkar S, van der Sijs H, Tucker AD, et al. Drug-drug interactions that should be non-interruptive in order to reduce alert fatigue in electronic health records. J Am Med Inform Assoc 2013; 20 (3): 489–93.
21. Adler-Milstein J, Holmgren AJ, Kralovec P, Worzala C, Searcy T, Patel V. Electronic health record adoption in US hospitals: the emergence of a digital “advanced use” divide. J Am Med Inform Assoc 2017; 24 (6): 1142–8.
22. Weingart SN, Toth M, Sands DZ, Aronson MD, Davis RB, Phillips RS. Physicians’ decisions to override computerized drug alerts in primary care. Arch Intern Med 2003; 163 (21): 2625–31.
23. Ancker JS, Edwards A, Nosal S, Hauser D, Mauer E, Kaushal R; HITEC Investigators. Effects of workload, work complexity, and repeated alerts on alert fatigue in a clinical decision support system. BMC Med Inform Decis Mak 2017; 17 (1): 36. doi: 10.1186/s12911-017-0430-8
24. Kesselheim AS, Cresswell K, Phansalkar S, Bates DW, Sheikh A. Analysis & commentary: Clinical decision support systems could be modified to reduce “alert fatigue” while still minimizing the risk of litigation. Health Aff (Millwood) 2011; 30 (12): 2310–7. doi: 10.1377/hlthaff.2010.111124
25. Keller JP. Clinical alarm hazards: a top ten health technology safety concern. J Electrocardiol 2012; 45 (6): 588–91. doi: 10.1016/j.jelectrocard.2012.08.050
