Abstract
The quality of implementation science and the efficiency of implementation practice are undermined by several measurement issues. First, few measures have evidence of fundamental psychometric properties, making it difficult to have confidence in study findings. Second, to accumulate knowledge, measures must be usable across studies; yet the majority of measures are setting-, population-, or intervention-specific and are used only once. Third, there are no minimal reporting standards for measures, a gap that contributes to the lack of psychometric evidence. Finally, only recently have measure developers begun to address the pragmatic qualities of measures. This chapter will address each issue and offer recommendations for resolving them in the service of advancing implementation science.
Notes
- 1. Ann Int Med. 2010;152(11):726–32. PMID: 20335313; BMC Medicine. 2010;8:18. PMID: 20334633; BMJ. 2010;340:c332. PMID: 20332509; J Clin Epidemiol. 2010;63(8):834–40. PMID: 20346629; Lancet. 2010;375(9721):1136, supplementary web appendix; Obstet Gynecol. 2010;115(5):1063–70. PMID: 20410783; Open Med. 2010;4(1):60–68; PLoS Med. 2010;7(3):e1000251. PMID: 20352064; Trials. 2010;11:32. PMID: 20334632
References
Aarons, G. A. (2004). Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research, 6(2), 61–74.
Aarons, G. A., Ehrhart, M. G., & Farahnak, L. R. (2014). The implementation leadership scale (ILS): Development of a brief measure of unit level implementation leadership. Implementation Science, 9(1), 1–10.
Aarons, G. A., Green, A. E., Palinkas, L. A., Self-Brown, S., Whitaker, D. J., Lutzker, J. R., … Chaffin, M. J. (2012). Dynamic adaptation process to implement an evidence-based child maltreatment intervention. Implementation Science, 7(1), 1–9.
Ajzen, I. (1991). The theory of planned behavior. Organizational Behavior and Human Decision Processes, 50(2), 179–211.
Beidas, R. S., Marcus, S., Aarons, G. A., Hoagwood, K. E., Schoenwald, S., Evans, A. C., … Mandell, D. S. (2015). Individual and organizational factors related to community clinicians’ use of therapy techniques in a large public mental health system. JAMA Pediatrics, 169(4), 374–382.
Bellg, A. J., Borrelli, B., Resnick, B., Hecht, J., Minicucci, D. S., Ory, M., … Czajkowski, S. (2004). Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology, 23(5), 443–451.
Bossuyt, P. M., Reitsma, J. B., Bruns, D. E., Gatsonis, C. A., Glasziou, P. P., Irwig, L., et al. (2015). STARD 2015: An updated list of essential items for reporting diagnostic accuracy studies. Radiology, 277(3), 826–832.
Boyd, M. R., Powell, B. J., Endicott, D., & Lewis, C. C. (2017). A method for tracking implementation strategies: An exemplar implementing measurement-based care in community behavioral health clinics. Behavior Therapy. Advance online publication. https://doi.org/10.1016/j.beth.2017.11.012
Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2(1), 1–9.
Casper, E. S. (2007). The theory of planned behavior applied to continuing education for mental health professionals. Psychiatric Services, 58(10), 1324–1329.
Chan, A.-W., Tetzlaff, J. M., Altman, D. G., Laupacis, A., Gøtzsche, P. C., Krleža-Jerić, K., et al. (2013). SPIRIT 2013 statement: Defining standard protocol items for clinical trials. Annals of Internal Medicine, 158(3), 200–207.
Chaudoir, S. R., Dugan, A. G., & Barr, C. H. (2013). Measuring factors affecting implementation of health innovations: A systematic review of structural, organizational, provider, patient, and innovation level measures. Implementation Science, 8(1), 1–20.
Chor, K. H. B., Wisdom, J. P., Olin, S.-C. S., Hoagwood, K. E., & Horwitz, S. M. (2015). Measures for predictors of innovation adoption. Administration and Policy in Mental Health and Mental Health Services Research, 42(5), 545–573.
Cobo, E., Cortés, J., Ribera, J. M., Cardellach, F., Selva-O’Callaghan, A., Kostov, B., et al. (2011). Effect of using reporting guidelines during peer review on quality of final manuscripts submitted to a biomedical journal: Masked randomised trial. BMJ, 343, d6783.
Damschroder, L. J., Aron, D. C., Keith, R. E., Kirsh, S. R., Alexander, J. A., & Lowery, J. C. (2009). Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science, 4(1), 1–15.
Durlak, J. A., & DuPre, E. P. (2008). Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology, 41(3–4), 327–350.
Ehrhart, M. G., Aarons, G. A., & Farahnak, L. R. (2014). Assessing the organizational context for EBP implementation: The development and validity testing of the Implementation Climate Scale (ICS). Implementation Science, 9(1), 1–11.
Emmons, K. M., Weiner, B., Fernandez, M. E., & Tu, S.-P. (2012). Systems antecedents for dissemination and implementation: A review and analysis of measures. Health Education & Behavior: The Official Publication of the Society for Public Health Education, 39(1), 87–105.
Farr, J. L., & West, M. A. (1990). Innovation and creativity at work: Psychological and organizational strategies. Chichester: Wiley.
Francis, J. J., Eccles, M. P., Johnston, M., Walker, A., Grimshaw, J., Foy, R., … Bonetti, D. (2004). Constructing questionnaires based on the theory of planned behaviour. A Manual for Health Services Researchers, 2010, 2–12.
Gagnier, J. J., Kienle, G., Altman, D. G., Moher, D., Sox, H., & Riley, D. (2013). The CARE guidelines: Consensus-based clinical case reporting guideline development. Journal of Medical Case Reports, 7(1), 1–6.
Gerring, J. (2001). Social science methodology: A criterial framework. Cambridge: Cambridge University Press.
Glasgow, R. E., & Riley, W. T. (2013). Pragmatic measures: What they are and why we need them. American Journal of Preventive Medicine, 45(2), 237–243.
Glasgow, R. E., Vinson, C., Chambers, D., Khoury, M. J., Kaplan, R. M., & Hunter, C. (2012). National Institutes of Health approaches to dissemination and implementation science: Current and future directions. American Journal of Public Health, 102(7), 1274–1281.
House, R. J., Hanges, P. J., Ruiz-Quintanilla, S. A., Dorfman, P. W., Javidan, M., Dickson, M., et al. (1999). Cultural influences on leadership and organizations: Project GLOBE. Advances in Global Leadership, 1(2), 171–233.
Hrisos, S., Eccles, M. P., Francis, J. J., Dickinson, H. O., Kaner, E. F. S., Beyer, F., & Johnston, M. (2009). Are there valid proxy measures of clinical behaviour? A systematic review. Implementation Science, 4(1), 1–20.
Husereau, D., Drummond, M., Petrou, S., Carswell, C., Moher, D., Greenberg, D., … Loder, E. (2013). Consolidated health economic evaluation reporting standards (CHEERS) statement. Cost Effectiveness and Resource Allocation, 11(1), 1–6.
Ibrahim, S., & Sidani, S. (2015). Fidelity of intervention implementation: A review of instruments. Health, 7(12), 1687–1695.
Jacobs, S. R., Weiner, B. J., Reeve, B. B., Hofmann, D. A., Christian, M., & Weinberger, M. (2015). Determining the predictors of innovation implementation in healthcare: A quantitative analysis of implementation effectiveness. BMC Health Services Research, 15(1), 1–13.
Johnson, K., Collins, D., & Wandersman, A. (2013). Sustaining innovations in community prevention systems: A data-informed sustainability strategy. Journal of Community Psychology, 41(3), 322–340.
Kilkenny, C., Browne, W. J., Cuthill, I. C., Emerson, M., & Altman, D. G. (2010). Improving bioscience research reporting: The ARRIVE guidelines for reporting animal research. PLoS Biology, 8(6), e1000412.
Kimberlin, C. L., & Winterstein, A. G. (2008). Validity and reliability of measurement instruments used in research. American Journal of Health-System Pharmacy, 65(23), 2276–2284.
Krause, J., Van Lieshout, J., Klomp, R., Huntink, E., Aakhus, E., Flottorp, S., et al. (2014). Identifying determinants of care for tailoring implementation in chronic diseases: An evaluation of different methods. Implementation Science, 9(1), 1–12.
Lehman, W. E. K., Greener, J. M., & Simpson, D. D. (2002). Assessing organizational readiness for change. Journal of Substance Abuse Treatment, 22(4), 197–209.
Lewis, C., Darnell, D., Kerns, S., Monroe-DeVita, M., Landes, S. J., Lyon, A. R., et al. (2016). Proceedings of the 3rd Biennial Conference of the Society for Implementation Research Collaboration (SIRC) 2015: Advancing efficient methodologies through community partnerships and team science. Implementation Science, 11(1), 1–38.
Lewis, C. C., Fischer, S., Weiner, B. J., Stanick, C., Kim, M., & Martinez, R. G. (2015). Outcomes for implementation science: An enhanced systematic review of instruments using evidence-based rating criteria. Implementation Science, 10, 1–17.
Lewis, C. C., Scott, K., Marti, C. N., Marriott, B. R., Kroenke, K., Putz, J. W., … Rutkowski, D. (2015). Implementing measurement-based care (iMBC) for depression in community mental health: A dynamic cluster randomized trial study protocol. Implementation Science, 10(1), 1–14.
Lewis, C. C., Stanick, C. F., Martinez, R. G., Weiner, B. J., Kim, M., Barwick, M., & Comtois, K. A. (2015). The society for implementation research collaboration instrument review project: A methodology to promote rigorous evaluation. Implementation Science, 10(1), 1–10.
Lewis, C. C., Weiner, B. J., Stanick, C., & Fischer, S. M. (2015). Advancing implementation science through measure development and evaluation: A study protocol. Implementation Science, 10(1), 1–18.
Martinez, R. G., Lewis, C. C., & Weiner, B. J. (2014). Instrumentation issues in implementation science. Implementation Science, 9(1), 1–9.
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. Annals of Internal Medicine, 151(4), 264–269.
Mokkink, L. B., Terwee, C. B., Patrick, D. L., Alonso, J., Stratford, P. W., Knol, D. L., … De Vet, H. C. (2010). The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: An international Delphi study. Quality of Life Research, 19(4), 539–549.
NQF: Prioritizing Measures. (n.d.). Retrieved 30 June 2016, from http://www.qualityforum.org/prioritizing_measures/
O’Brien, B. C., Harris, I. B., Beckman, T. J., Reed, D. A., & Cook, D. A. (2014). Standards for reporting qualitative research: A synthesis of recommendations. Academic Medicine, 89(9), 1245–1251.
Ogrinc, G., Davies, L., Goodman, D., Batalden, P., Davidoff, F., & Stevens, D. (2015). SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): Revised publication guidelines from a detailed consensus process. The Journal of Continuing Education in Nursing, 46(11), 501–507.
Palinkas, L. A., Aarons, G. A., Horwitz, S., Chamberlain, P., Hurlburt, M., & Landsverk, J. (2010). Mixed method designs in implementation research. Administration and Policy in Mental Health and Mental Health Services Research, 38(1), 44–53.
PAR-13-055: Dissemination and Implementation Research in Health (R01). (n.d.). Retrieved 30 June 2016, from http://grants.nih.gov/grants/guide/pa-files/PAR-13-055.html
Pinnock, H., Epiphaniou, E., Sheikh, A., Griffiths, C., Eldridge, S., Craig, P., & Taylor, S. J. C. (2015). Developing standards for reporting implementation studies of complex interventions (StaRI): A systematic review and e-Delphi. Implementation Science: IS, 10(1), 1–10.
Proctor, E. K., Landsverk, J., Aarons, G., Chambers, D., Glisson, C., & Mittman, B. (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research, 36(1), 24–34.
Rabin, B. A., Lewis, C. C., Norton, W. E., Neta, G., Chambers, D., Tobin, J. N., … Glasgow, R. E. (2016). Measurement resources for dissemination and implementation research in health. Implementation Science, 11(1), 1–9.
Rabin, B. A., Purcell, P., Naveed, S., Moser, R. P., Henton, M. D., Proctor, E. K., … Glasgow, R. E. (2012). Advancing the application, quality and harmonization of implementation science measures. Implementation Science, 7(1), 1–11.
Reliability and Validity. (n.d.). Retrieved 30 June 2016, from https://www.uni.edu/chfasoa/reliabilityandvalidity.htm
Robins, C. S., Ware, N. C., Willging, C. E., Chung, J. Y., Lewis-Fernández, R., et al. (2008). Dialogues on mixed-methods and mental health services research: Anticipating challenges, building solutions. Psychiatric Services, 59(7), 727–731.
Schulz, K. F., Altman, D. G., & Moher, D. (2010). CONSORT 2010 statement: Updated guidelines for reporting parallel group randomised trials. BMC Medicine, 8(1), 1–9.
Siegel, S. (1964). Decision and choice: Contributions of Sidney Siegel. New York, NY: McGraw-Hill.
Stein, K. F., Sargent, J. T., & Rafaels, N. (2007). Intervention research: Establishing fidelity of the independent variable in nursing clinical trials. Nursing Research, 56(1), 54–62.
Streiner, D. L., & Norman, G. R. (2008). Health measurement scales: A practical guide to their development and use (4th ed.). New York, NY: Oxford University Press.
Tabak, R. G., Khoong, E. C., Chambers, D. A., & Brownson, R. C. (2012). Bridging research and practice: Models for dissemination and implementation research. American Journal of Preventive Medicine, 43(3), 337–350.
Teddlie, C., & Tashakkori, A. (2003). Major issues and controversies in the use of mixed methods in the social and behavioral sciences. In Handbook of mixed methods in social & behavioral research (1st ed., pp. 3–50). Thousand Oaks, CA: Sage.
The EQUATOR Network | Enhancing the QUAlity and Transparency Of Health Research. (n.d.). Retrieved from http://www.equator-network.org/
Turner, L., Shamseer, L., Altman, D. G., Schulz, K. F., & Moher, D. (2012). Does use of the CONSORT statement impact the completeness of reporting of randomised controlled trials published in medical journals? A Cochrane review. Systematic Reviews, 1(1), 1–7.
Valente, T. W. (1996). Social network thresholds in the diffusion of innovations. Social Networks, 18(1), 69–89.
Von Elm, E., Altman, D. G., Egger, M., Pocock, S. J., Gøtzsche, P. C., Vandenbroucke, J. P., et al. (2007). The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: Guidelines for reporting observational studies. Preventive Medicine, 45(4), 247–251.
Weiner, B., Amick, H., & Lee, S.-Y. (2008). Conceptualization and measurement of organizational readiness for change: A review of the literature in health services research and other fields. Medical Care Research and Review., 65(4), 379–439.
Wisdom, J. P., Chor, K. H. B., Hoagwood, K. E., & Horwitz, S. M. (2014). Innovation adoption: A review of theories and constructs. Administration and Policy in Mental Health and Mental Health Services Research, 41(4), 480–502.
Zamanzadeh, V., Ghahramanian, A., Rassouli, M., Abbaszadeh, A., Alavi-Majd, H., & Nikanfar, A.-R. (2015). Design and Implementation Content Validity Study: Development of an instrument for measuring Patient-Centered Communication. Journal of Caring Sciences, 4(2), 165–178.
Zuckerman, H. S., Shortell, S. M., Morrison, E. M., & Friedman, B. (1990). Strategic choices for America’s hospitals. JSTOR.
Zumbo, B. D., & Chan, E. K. (2014). Validity and validation in social, behavioral, and health sciences. Cham: Springer.
Acknowledgments
Research reported in this publication was supported by the National Institute of Mental Health under Award Number R01MH106510-01 (PI: C. C. Lewis).
Glossary
- Absorptive Capacity (pg. 232): An organization’s ability to access and effectively use information (Emmons et al., 2012)
- Adoption (pg. 233): The complete or partial decision to proceed with the implementation of an innovation as a distinct process preceding but separate from actual implementation (Wisdom et al., 2014)
- Barrier (pg. 237): Factor that obstructs changes in targeted professional behaviors or healthcare delivery processes, such as the cost and complexity of the intervention (Krause et al., 2014)
- Collaboration (pg. 230): One of five core tenets of dissemination and implementation (D&I) research proposed by Glasgow et al. (2012), defined as the use of interdisciplinary research teams and research-practice collaborations (Glasgow et al., 2012)
- Concurrent Validity (pg. 231): The degree to which an instrument distinguishes groups it should theoretically distinguish. Concurrent validity is not demonstrated if there is no reasonable hypothesized difference among groups on the instrument (Weiner et al., 2008)
- Construct Validity (pg. 232): The degree to which inferences can legitimately be made from an instrument to the theoretical construct that it purportedly measures (Weiner et al., 2008)
- Content Validity (pg. 231): The ability of the selected items to reflect the variables of the construct in the measure (Zamanzadeh et al., 2015)
- Convergent Validity (pg. 231): The degree to which an instrument performs in a similar manner to other instruments that purportedly measure the same construct (e.g., two measures show a strong positive correlation). Convergent validity is most often assessed through confirmatory factor analysis (Weiner et al., 2008)
- Criterion Validity (pg. 232): An empirical check on the performance of an instrument against some criteria (Weiner et al., 2008)
- Cumulative Knowledge (pg. 230): One of five core tenets of dissemination and implementation (D&I) research proposed by Glasgow et al. (2012), defined as the need to create resources and funds of accessible knowledge about D&I research and its findings (Glasgow et al., 2012)
- Discriminant Validity (pg. 231): The degree to which an instrument performs in a different manner to other instruments that purportedly measure different constructs. Discriminant validity is most often assessed through confirmatory factor analysis (Weiner et al., 2008)
- Efficiency (pg. 230): One of five core tenets of dissemination and implementation (D&I) research proposed by Glasgow et al. (2012), defined as the use of efficient methods to study D&I (Glasgow et al., 2012)
- Facilitator (pg. 237): Factor that enables changes in targeted professional behaviors or healthcare delivery processes (Krause et al., 2014)
- Fidelity (pg. 228): The competent and reliable delivery of an intervention as intended in the original design (Ibrahim & Sidani, 2015)
- Homonymy (pg. 242): Models that define the same construct in different ways (i.e., two discrepant definitions for the same term) (Martinez et al., 2014)
- Implementation Outcomes (pg. 232): Effects of deliberate and purposive actions to implement new treatments, practices, and services (Proctor et al., 2009)
- Improved Capacity (pg. 230): One of five core tenets of dissemination and implementation (D&I) research proposed by Glasgow et al. (2012), defined as a necessary increase in the capacity to train future D&I researchers and share advances made through implementation science with all stakeholders (Glasgow et al., 2012)
- Intermediary (pg. 229): Also known as trainers, internal and external facilitators, implementation practitioners, and purveyors; intermediaries provide training and consultation and otherwise assist community settings in implementing evidence-based practices (Lewis et al., 2016)
- Internal Consistency (pg. 233): Refers to whether several items that propose to measure the same general construct produce similar scores (a computational sketch follows this glossary)
- Inter-rater Reliability (pg. 231): A measure of reliability used to assess the degree to which different judges or raters agree in their assessment decisions (“Reliability and Validity,” n.d.)
- Leadership (pg. 229): The ability of an individual to influence, motivate, and enable others to contribute toward the effectiveness and success of organizations of which they are members (House et al., 1999)
- Managerial Relations (pg. 232): Alliances between groups within an organization to promote change (Zuckerman et al., 1990)
- Mixed Methods (pg. 228): Focus on collecting, analyzing, and merging both quantitative and qualitative data into one or more studies. The central premise of these designs is that the use of quantitative and qualitative approaches in combination provides a better understanding of research issues than either approach alone (Robins et al., 2008)
- Norms (pg. 228): The sample size and the mean (M) and standard deviation (SD) for the instrument results (Lewis, Fischer, et al., 2015)
- Organizational Climate (pg. 232): Organizational members’ perceptions of their work environment (Emmons et al., 2012)
- Organizational Readiness for Change (pg. 231): The extent to which organizational members are psychologically and behaviorally prepared to implement organizational change (Weiner et al., 2008)
- Penetration (pg. 228): The integration of a practice within a service setting and its subsystems (Proctor et al., 2009)
- Pragmatic Measure (pg. 228): One that has relevance to stakeholders and is feasible to use in most real-world settings to assess progress (Glasgow & Riley, 2013)
- Predictive Validity (pg. 232): The degree to which the instrument can predict or correlate with an outcome of interest measured at some time in the future (Lewis, Fischer, et al., 2015)
- Predictor (pg. 233): Variable that may anticipate implementation effectiveness (Jacobs et al., 2015)
- Proxy Measure (pg. 231): An indirect measure of clinical practice, such as a review of medical records or an interview with the clinician (Hrisos et al., 2009)
- Qualitative Methods (pg. 228): Used to explore and obtain depth of understanding as to the reasons for success or failure to implement evidence-based practice or to identify strategies for facilitating implementation (Teddlie & Tashakkori, 2003)
- Quantitative Methods (pg. 228): Used to test and confirm hypotheses based on an existing conceptual model and obtain breadth of understanding of predictors of successful implementation (Teddlie & Tashakkori, 2003)
- Reliability (pg. 231): The extent to which a measure (or its set of items) produces the same results on repeated administrations
- Responsiveness (pg. 233): The ability of an instrument to detect clinically important changes in the construct it measures over time (Lewis, Fischer, et al., 2015)
- Rigor and Relevance (pg. 230): One of five core tenets of dissemination and implementation (D&I) research proposed by Glasgow et al. (2012), defined as the use of rigorous research methods that address critical questions in relevant contexts (Glasgow et al., 2012)
- Social Network (pg. 233): The pattern of relations and interactions that exist among people, organizations, communities, or other social systems (Valente, 1996)
- Stakeholder (pg. 229): An individual, group, or organization who may affect, be affected by, or perceive itself to be affected by a decision, activity, or outcome of a project, program, or portfolio
- Structural Validity (pg. 233): The degree to which all test items rise and fall together or, by contrast, one set of items rises and falls in one pattern while another set rises and falls in a different pattern (Lewis, Fischer, et al., 2015)
- Synonymy (pg. 242): Models that define different constructs the same way (i.e., two unique terms assigned the same definition) (Martinez et al., 2014)
- Test-retest Reliability (pg. 233): A measure of reliability obtained by administering the same test twice over a period of time to a group of individuals. The scores from Time 1 and Time 2 can then be correlated in order to evaluate the test for stability over time (“Reliability and Validity,” n.d.)
- Theory of Planned Behavior (pg. 237): States that people’s behavior is determined by their intention to perform a given behavior (Casper, 2007)
- Translation Validity (pg. 232): The degree to which an instrument accurately translates (or carries) the meaning of the construct (Weiner et al., 2008)
- Usability (pg. 233): The ease of administration, operationalized as the total number of items on the measure being rated (Lewis, Fischer, et al., 2015)
- Validity (pg. 231): The quality of the inferences, claims, or decisions drawn from the scores of an instrument
- Vision (pg. 232): An idea of a valued outcome which represents a higher-order goal and a motivating force at work (Farr & West, 1990)
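Several of the reliability terms defined above (internal consistency, test-retest reliability) are routinely reported as simple statistics. The following minimal sketch, offered for illustration only and not drawn from the chapter, shows how Cronbach's alpha and a test-retest correlation might be computed; the item-response values and the simulated retest noise are hypothetical.

```python
# Illustrative sketch: internal consistency (Cronbach's alpha) and
# test-retest reliability for a hypothetical 4-item measure.
import numpy as np

# Hypothetical responses at Time 1 (rows = respondents, columns = items).
time1 = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [5, 4, 4, 5],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
], dtype=float)

# Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of total score).
k = time1.shape[1]
item_vars = time1.var(axis=0, ddof=1)
total_var = time1.sum(axis=1).var(ddof=1)
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Test-retest reliability: correlate total scores across two administrations
# (Time 2 is simulated here by adding small random noise to Time 1).
time2 = time1 + np.random.default_rng(0).normal(0, 0.5, size=time1.shape)
retest_r = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]

print(f"Cronbach's alpha (internal consistency): {alpha:.2f}")
print(f"Test-retest correlation: {retest_r:.2f}")
```

An analogous agreement statistic computed between two raters' scores, such as an intraclass correlation coefficient, would illustrate the inter-rater reliability entry in the same way.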
Copyright information
© 2020 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Lewis, C.C., Dorsey, C. (2020). Advancing Implementation Science Measurement. In: Albers, B., Shlonsky, A., Mildon, R. (eds) Implementation Science 3.0. Springer, Cham. https://doi.org/10.1007/978-3-030-03874-8_9
DOI: https://doi.org/10.1007/978-3-030-03874-8_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-03873-1
Online ISBN: 978-3-030-03874-8