  • Research article
  • Open access

Patterns and correlates of mis-implementation in state chronic disease public health practice in the United States

Abstract

Background

Much of the disease burden in the United States is preventable through the application of existing knowledge. State-level public health practitioners are in an ideal position to affect programs and policies related to chronic disease, but the extent to which mis-implementation is occurring in these programs is largely unknown. Mis-implementation refers to ending effective programs and policies prematurely or continuing ineffective ones.

Methods

In 2018, a comprehensive survey assessing the extent of mis-implementation and multi-level influences on mis-implementation was administered to state health department (SHD) employees. Questions were developed from the previous literature. Surveys were emailed to randomly selected SHD employees across the United States. Spearman’s correlation and multinomial logistic regression were used to assess factors associated with mis-implementation.

Results

Half (50.7%) of respondents were chronic disease program managers or unit directors. Forty-nine percent reported that their SHD sometimes, often or always continued ineffective programs. Over 50% also reported that their SHD sometimes or often ended effective programs. The data suggest the strongest correlates and predictors of mis-implementation were at the organizational level. For example, reporting that the number of organizational layers impeded decision-making was significant for both continuing ineffective programs (OR=4.70; 95% CI=2.20, 10.04) and ending effective programs (OR=3.23; 95% CI=1.61, 7.40).

Conclusion

The data suggest that changing certain agency practices may help in minimizing the occurrence of mis-implementation. Further research should focus on adding context to these issues and helping agencies engage in appropriate decision-making. Greater attention to mis-implementation should lead to greater use of effective interventions and more efficient expenditure of resources, ultimately to improve health outcomes.

Background

Recently, there has been an increasing emphasis in public health practice on the use of evidence-based decision-making (EBDM) in chronic disease prevention and control [1, 2]. The need to protect and improve the health and well-being of an entire population, rather than the individual, is the basis of public health programs, which in the United States are commonly delivered through state-level agencies and their local partners. While the field has made strides in the expected use of evidence-based practices and programs, there is still a gap in decision-making practices related to ending non-evidence-based programs and continuing evidence-based programs (EBPs) [3, 4]. Since much of the chronic disease burden is preventable [5,6,7], gaps in the delivery of EBPs hinder effective public health practice to improve health.

The mechanisms behind mis-implementation are an important area of inquiry for public health practitioners and researchers [8,9,10]. The term mis-implementation refers to the inappropriate termination of evidence-based programs or the inappropriate continuation of non-evidence-based programs [8]. A notable U.S. example of inappropriate termination of an evidence-based policy is the rollback of the Bush- and Obama-era healthy school lunch standards [11], which were relaxed despite evidence that they increased school-aged children’s consumption of healthy foods [12]. Conversely, an example of inappropriate continuation of non-evidence-based programs is the continued use of health fairs for community screenings, interventions and education. While health fairs may help increase the visibility of services to subsets of the community, there is limited evidence that they increase screening follow-up, enhance sustained health knowledge, or improve health outcomes [13, 14]. Previous work in this area suggests that between 58 and 62% of public health programs are evidence-based [15, 16]. However, only 37% of chronic disease prevention staff in state health departments reported that programs are often or always discontinued when they should continue [10]. These studies set a baseline for mis-implementation context but did not further explore the factors contributing to these decision-making processes, nor did they assess the degree to which mis-implementation is occurring in chronic disease public health practice.

The evidence-based decision-making (EBDM) and related literature suggests that a mix of individual, organizational, agency, and policy-related factors are at play in organizational decision-making, including whether to begin or continue implementing programs, and in program outcomes [1, 10]. EBDM, an approach to decision-making that combines the appropriate research evidence, practitioner expertise, and the characteristics, needs, and preferences of the community, can have a significant impact on health-related outcomes [1, 17]. Specifically, leadership support in applying EBDM frameworks can enhance an organization’s capacity for improved public health practices [1, 18, 19]. Concurrently, contextual factors, cost burden, and the characteristics of early adopters also factor into mis-implementation outcomes [16, 20]. These concepts support the factors that inform our original mis-implementation framework [8]. In a cross-sectional U.S. study of local health departments, higher perceived organizational supports for EBDM were associated with lower perceived frequency of inappropriate continuation [21]. In cross-country comparisons of mis-implementation involving Australia, Brazil, China, and the United States, leadership support and political context were common factors in whether chronic disease programs continued or ended inappropriately across all four countries [9].

State health departments (SHDs) are a significant driver of public health programs within the United States. Most federal funds for chronic disease prevention are directed through SHDs, which provide resources and guidance for local-level implementation of public health programs [22]. This dynamic of the SHD as the pass-through organization means that its organizational dynamics are key to the successful outcomes of these programs. A common delivery structure in the U.S. is for local public health agencies and community-based organizations to design how they will implement topic-specific programs as they respond to SHD requests for proposals. Another delivery structure is contractual relationships generated by the SHDs, in which local agencies choose from a menu of programmatic approaches to chronic disease prevention provided by the SHD. While an estimated $1.1 billion flows through state public health chronic disease and cancer prevention programs annually, the majority of these funds focus on secondary prevention (e.g., cancer screening), leaving resources for primary prevention scarce [23, 24]. With this scarcity in prevention funding, it is essential that every dollar directed toward chronic disease programs have maximum impact.

This study seeks to: 1) assess the extent to which mis-implementation of chronic disease programs is occurring in state health departments, and 2) identify the most important factors associated with mis-implementation among programs overseen by SHDs [8, 25].

Methods

This study is a cross-sectional assessment of decision-making practices within state health departments. We surveyed current SHD employees across the U.S. to gather quantitative data to identify the perceived frequency and correlates of mis-implementation within SHD chronic disease units. Human subjects approval was obtained from the Washington University in St. Louis Institutional Review Board (#201812062).

Survey development

We undertook an extensive literature review to develop a survey informed by the literature and to address gaps in existing instruments. Survey development was also guided by the study team’s previously described conceptual framework to ensure that measures included EBDM skills, organizational capacity for EBDM, and external influences such as funding and the policy climate [8].

A literature review of several databases (i.e., PubMed, SCOPUS, Web of Science) was conducted to search for existing survey instruments regarding organizational decision-making. Identified measures were summarized according to setting, audience, psychometric properties, and survey question themes. From our review of 63 surveys, we selected items from 23 measures to examine in relation to our conceptual framework [8,9,10, 18, 26,27,28,29,30,31,32,33,34,35,36,37,38,39,40]. Questions pertaining to political influence and mis-implementation decision-making were iteratively developed and refined, as little published literature was available at the time to inform them. Draft questions in each domain (individual skills, organizational/agency capacity, mis-implementation decision-making, external influences) were updated and underwent three separate reviews by the study team and a group of practitioner experts to produce a final draft of the study instrument. Since the survey had not been previously validated, the final draft underwent cognitive response testing with 11 former SHD chronic disease directors. Test-retest reliability assessment of the revised draft with 39 current SHD chronic disease unit staff found consistent scores, and only minor changes to the survey were needed.
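
The paper does not report which reliability statistic was used, but one simple way to picture a test-retest check like the one described above is to correlate the two administrations of each item. The following Python sketch is a hypothetical illustration only; the simulated responses and the choice of Spearman’s rank correlation are assumptions, not the authors’ method.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical test-retest check: the same 5-point item administered twice
# to 39 respondents. The paper does not specify its reliability statistic;
# rank correlation between administrations is one simple, common choice.
rng = np.random.default_rng(1)
time1 = rng.integers(1, 6, 39)           # round 1: responses on a 1-5 scale
shift = rng.integers(-1, 2, 39)          # small response drift at retest
time2 = np.clip(time1 + shift, 1, 5)     # round 2: mostly consistent answers

rho, p = spearmanr(time1, time2)
print(f"test-retest rho = {rho:.2f} (p = {p:.3f})")
```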

Measures

Survey measures addressed the following topics: participant demographic characteristics, EBDM skills, perceived frequency of mis-implementation, reasons for mis-implementation, perceived organizational supports for EBDM, and external influences. External influences included perceived governor’s office and state legislative support for evidence-based interventions (EBIs) and the perceived importance of multi-sector partnering. Exact item wording is provided in the national survey in Additional file 1. Survey questions for EBDM skills, organizational supports, and external influences used 5-point Likert scale responses, ranging from “Strongly disagree” to “Strongly agree” or from “Not at all” to “Very great extent”.

Perceived frequency of mis-implementation was assessed with two questions: “How often do effective programs, overseen by your work unit, end when they should have continued?” and “How often do ineffective programs, overseen by your work unit, continue when they should have ended?” The response options were: never, rarely, sometimes, often, and always. These variables are subsequently referred to as inappropriate termination and inappropriate continuation, respectively.

Participants

Participants for the survey were recruited from the National Association of Chronic Disease Directors (NACDD) membership list, which consists of SHD employees working in their respective chronic disease units. Participants were randomly selected from the membership roll after individuals from territories and those in non-qualifying positions (administrative support and financial personnel) were removed. Emails were sent in June 2018 inviting a randomly selected sample of 1239 members to participate in a Qualtrics online survey. Participants were offered a choice between a $20 Amazon gift card and a donation made on their behalf to a public health charity of their choosing. A follow-up email was sent two weeks after the initial email, with a reminder phone call a week later. Non-respondents could receive up to three reminder emails and two reminder voicemails or a single phone conversation to address questions. Because our initial sampling list lacked key characteristics (e.g., role in the agency, years working in the agency), we could not directly compare non-respondents with respondents. The online survey closed at the end of August 2018.

Data cleaning and analysis

Respondents who answered any questions beyond the demographic questions were included in the sample. State-level variables, such as population size, funding from the Centers for Disease Control and Prevention (CDC) (the major funding source for state chronic disease control), and state governance type, were added to the dataset from other publicly available sources, such as the CDC grant funding profile, the Association of State and Territorial Health Officials (ASTHO) State Profiles (see Notes), and Public Health Accreditation Board data [23, 25, 41, 42]. Dichotomized versions of the Likert scale variables were created given the limited distribution of responses across the original scale and to facilitate interpretation: responses of Agree or Strongly Agree were coded as 1, and all remaining responses were coded as 0.
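
As a minimal sketch of the recoding rule just described, the following Python snippet dichotomizes 5-point Likert items. The column names and example data are hypothetical (the study’s analysis was done in SPSS); only the Agree/Strongly Agree → 1 rule comes from the text above.

```python
import pandas as pd

# Illustrative recoding of 5-point Likert items using the rule described
# above: "Agree"/"Strongly Agree" -> 1, all other responses -> 0.
# Column names and data are hypothetical, not from the study's codebook.
df = pd.DataFrame({
    "leadership_supports_ebis": ["Agree", "Neutral", "Strongly Agree", "Disagree"],
    "layers_impede_decisions": ["Strongly Disagree", "Agree", "Agree", "Neutral"],
})

for item in ["leadership_supports_ebis", "layers_impede_decisions"]:
    df[item + "_dichot"] = df[item].isin(["Agree", "Strongly Agree"]).astype(int)

print(df.filter(like="_dichot"))
```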

Descriptive statistics were calculated for all variables in SPSS version 26. To assess associations, Spearman’s correlations were calculated between each non-dichotomized mis-implementation variable and the individual demographic characteristics, individual skills, organizational capacity for EBIs, and external factors. Multinomial logistic regression was then used to assess how well these variables predicted the mis-implementation outcomes. The dependent variables (inappropriate termination and inappropriate continuation) were re-categorized as 1) often/always, 2) sometimes, and 3) never/rarely (reference category). Multinomial regression was used because the proportional odds assumption of an ordinal regression was violated. The independent variables were dichotomized (as described above). Two separate models were fit: the first assessing inappropriate termination among programs overseen by SHDs and the second assessing inappropriate continuation. We judged two separate models appropriate because inappropriate termination and inappropriate continuation are two different phenomena within the overall mis-implementation concept. For each of the two dependent variables, an initial model was run for each domain with all of that domain’s variables included. All variables significant in these initial models were then added to a final version of each model (inappropriate termination and inappropriate continuation).
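
To make the two-step analysis concrete, here is a hedged Python sketch on simulated data: a Spearman correlation on the non-dichotomized outcome, then a multinomial logit with the outcome re-categorized to never/rarely (reference), sometimes, and often/always. The variable names and simulated data are illustrative assumptions; this is not the authors’ SPSS code.

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

# Simulated data standing in for the survey: a 5-level mis-implementation
# outcome (1=never ... 5=always) and one dichotomized predictor
# ("number of layers of authority impedes decision-making").
rng = np.random.default_rng(0)
n = 300
layers_impede = rng.integers(0, 2, n)
outcome_5pt = np.clip(rng.poisson(2 + layers_impede, n) + 1, 1, 5)

# Step 1: Spearman correlation on the non-dichotomized outcome.
rho, p = spearmanr(outcome_5pt, layers_impede)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Step 2: multinomial logit with the outcome re-categorized to
# 0 = never/rarely (reference), 1 = sometimes, 2 = often/always.
outcome_3cat = pd.cut(outcome_5pt, bins=[0, 2, 3, 5], labels=[0, 1, 2]).astype(int)
X = sm.add_constant(pd.DataFrame({"layers_impede": layers_impede}))
fit = sm.MNLogit(outcome_3cat, X).fit(disp=False)

# Exponentiated coefficients are odds ratios vs. the never/rarely reference.
print(np.exp(fit.params))
```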

Results

Demographic characteristics

The final response rate was 48.3% (n=643). There were respondents from every state, but the number of responses per state was not proportional to state population size. In the interest of confidentiality, responses were grouped by ASTHO-defined regions [41], and there was a relatively even distribution of participants across regions (Table 1). Half (50.7%) of the respondents were chronic disease program managers, and on average they had been in their position for over six years. Most respondents worked across multiple health areas, with cancer the most represented program area. Thirty-five percent of respondents had a master’s or higher degree related to public health.

Table 1 Participant characteristics of U.S. SHD employees in chronic disease prevention units, 2018 survey (N=643)

Mis-implementation patterns

When asked “How often do effective programs, overseen by your work unit, end when they should have continued?” 50.7% of respondents indicated sometimes, often or always (Table 2). Respondents were asked to choose the top three reasons for effective programs ending (but not in a ranked order). The most common responses were: funding priorities changed/funding ended (87.6%); support from leaders in your agency changed (38.9%); support from policy makers changed (34.2%); and program was not sustainable (30.2%) (Table 2).

Table 2 Reported frequency and reasons for mis-implementation from a survey of U.S. SHD chronic disease unit staff, 2018

Regarding inappropriate continuation, when asked “How often do ineffective programs, overseen by your work unit, continue when they should have ended?” 48.5% of respondents indicated sometimes, often or always. Respondents were also asked to choose the top three common reasons for ineffective programs continuing (not in ranked order). The most common responses were: funder priorities to maintain program (43.4%); policy makers’ request or requirements to continue (42.9%); agency leadership requests to continue (37.9%); and standard is to maintain status quo (36.5%) (Table 2).

Mis-implementation correlates

The number of years a participant had been working in their current position (r= −0.11), years working at their agency (r= −0.09), and years working in public health (r= −0.10) showed small, significant negative correlations with inappropriate continuation (Table 3), meaning more years of experience were associated with lower perceived frequency of inappropriate continuation. None of the individual skills showed a statistically significant association with either inappropriate termination or inappropriate continuation. All of the organizational capacity variables showed small, significant negative associations with both mis-implementation variables, meaning higher perceived organizational capacity was associated with lower perceived frequency of mis-implementation (Table 3). External variables related to lawmakers’ priorities and support showed small, significant negative associations with both the inappropriate termination and inappropriate continuation variables.

Table 3 Practitioner, Organization, and External Correlates of Mis-Implementation in U.S. SHD chronic disease units, 2018 (N=613)

In the final model for inappropriate termination (Table 4), the largest effects were for having the individual skill to modify EBIs from one priority population to another (OR=3.24; 95% CI=1.19, 8.85); reporting that the number of layers of authority impedes decision-making (OR=3.23; 95% CI=1.61, 7.40); and reporting that leadership perseveres through the ups and downs of implementing EBIs (OR=0.16; 95% CI=0.07, 0.34). In the final model for inappropriate continuation, the largest effects were for reporting that the number of layers of authority impedes decision-making (OR=4.70; 95% CI=2.20, 10.04); use of economic evaluation in decision-making (OR=0.35; 95% CI=0.17, 0.73); and leadership competence in managing change (OR=0.26; 95% CI=0.13, 0.53).

Table 4 Mis-Implementation Predictors among programs overseen by U.S. state health department chronic disease unit staff, 2018 (N=613)

Discussion

A set of organization/agency capacity factors demonstrated more consistent associations with mis-implementation outcomes than the individual skills of staff. These factors were inversely related to mis-implementation outcomes (i.e., as perceived agency capacity increased, perceived frequency of mis-implementation decreased). These findings are consistent with our earlier study among U.S. local health departments, which found organizational supports for EBDM were associated with lower perceived frequency of inappropriate continuation [21]. This suggests that agency culture and capacity, rather than the skills of individual staff, are significant protective factors against mis-implementation across multiple types of public health organizations. Importantly, the agency-level variable reporting that the number of layers of authority impedes decision-making about program continuation or ending was strongly associated with both inappropriate termination and inappropriate continuation. This suggests that highly vertical organizations may be more vulnerable to ineffective decision-making around program continuation or ending. Given that state health departments vary widely in their organizational structures, further work is needed to understand how a large number of layers affects decision-making, and whether flatter structures lead to more frequent use of evidence-based decision-making in public health practice [17].

Outside of funding, the primary reported reasons for inappropriate termination or continuation were changing support from leaders and policymakers. We saw more variability in the reasons for inappropriate continuation than for termination. Inappropriate termination was heavily skewed toward funding priorities changing or funding ending, as expected given the predominance of state public health programs built on time-limited grant funding [43]. The top four reasons for inappropriate continuation were spread more evenly across multiple domains. This variability could indicate that the processes leading an ineffective program to continue tend to involve multiple domains, but it also allows more opportunity for modification.

The two factors most strongly negatively correlated with inappropriate continuation related to leaders’ flexibility: work unit leaders are competent at managing change (r= −0.30), and leadership reacts to critical issues regarding the implementation of EBIs (r= −0.30). The two factors most strongly correlated with inappropriate termination were: work unit leadership reacts to critical issues regarding the implementation of EBIs, and work unit leadership encourages planning for the sustainability of programs (r= −0.28 and −0.27, respectively). Again, this suggests agency culture and leadership are strong drivers of mis-implementation outcomes, specifically in how leadership conveys the importance of EBI use and flexibility in program implementation and adaptation.

Our findings are largely consistent with the EBDM literature and have several implications for public health practice. Reviews have found that organizational climate, leadership support, staff commitment and skills, adequate staffing and low staff turnover, organizational resources, and partnerships affect EBI sustainability [4, 44, 45]. Policy, in the form of legislation and regulation, is also associated with sustainment of programs in community, clinical, and social service settings [45]. Engaging community leaders and other policymakers throughout programmatic decision-making can increase the likelihood of program sustainment [44]. While de-implementation of ineffective clinical tests or services has been studied, there is sparse parallel literature on de-implementation of ineffective public health programs [8, 46,47,48]. As our study illustrates, effective public health practice rests not solely on the effectiveness of the programs themselves but also on the capacity of the organizations that deliver them. Capacity is multi-faceted, and understanding an organization’s culture and hierarchy could reveal more about successful public health program implementation.

Limitations

Response rates varied enough across states that we were not able to study state-level correlates in detail. In the absence of other organizational and administrative data, this study relied on self-reported surveys of individual and perceived organizational characteristics. While we asked respondents about their level of involvement in decision-making, they were not always in a position to be privy to the reasons behind decisions, or they may have joined the agency after a decision about a program had concluded.

Compared with previous pilot work, the perceived frequency of mis-implementation in SHDs was higher in this study (36.5% vs. 50.7% for inappropriate termination and 24.7% vs. 48.5% for inappropriate continuation), although this difference may be attributable in part to updates to the mis-implementation survey item definitions and changes in the approach to categorizing responses [9, 10, 21]. In earlier studies, the recoded dichotomized mis-implementation variables included only the often/always responses. After examining the distribution of responses to the mis-implementation variables, we thought it important to include the “sometimes” response when categorizing mis-implementation, because “sometimes” still captures the phenomenon occurring and excluding it could leave out nuances in the data.

Future directions

These results provide a first look at factors that may be related to the phenomenon of mis-implementation in public health practice. Later phases of this study include eight case studies highlighting lessons learned around mis-implementation and agent-based modeling to identify the dynamic interactions among individual, organizational, and contextual factors, with findings disseminated back to the state health departments [8]. The results of these qualitative case studies will be available in future publications. The agent-based models should provide decision-making tools to better facilitate evidence-based decision-making.

There is also a need to explore mis-implementation in other public health settings. While our study focuses on SHDs, local health departments and non-profit settings are significant implementers of public health programs. There is also sparse information on how mis-implementation may vary across public health program areas (e.g., chronic disease, infectious disease, maternal and child health). Additional comparisons of organizational structures across state health departments could also explore the context underlying the “flattening” variable we found as an important correlate of mis-implementation.

Conclusion

While our understanding of mis-implementation in public health practice is at an early stage, our findings provide practitioners and applied researchers with some actionable findings. For example, based on our study and related literature [18, 49, 50], it appears that efficiency and effectiveness may be gained by flattening public health agencies and fostering an organizational culture that supports EBDM. Given emerging evidence that chronic diseases are a significant moderating factor in the outcomes of pressing health concerns such as COVID-19 and cancer risk [51, 52], suggestions like these could help maximize the dollars spent on public health programs, ensuring that appropriate evidence-based programs contribute to improved health outcomes and benefit the communities they serve.

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Notes

  1. A report on state and territorial public health agencies outlining their structures, activity, financial and workforce resources. The report is updated every 2–3 years [41].

Abbreviations

SHD: State Health Department

EBDM: Evidence-Based Decision-Making

EBIs: Evidence-Based Interventions

ASTHO: Association of State and Territorial Health Officials

EBPs: Evidence-Based Programs

NACDD: National Association of Chronic Disease Directors

References

  1. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201.

  2. Brownson RC, Gurney JG, Land GH. Evidence-based decision making in public health. J Public Health Manage Pract. 1999;5(5):86–97.

  3. Proctor E, Luke D, Calhoun A, McMillen C, Brownson R, McCrary S, Padek M. Sustainability of evidence-based healthcare: research agenda, methodological advances, and infrastructure support. Implement Sci. 2015;10:88.

  4. Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implement Sci. 2012;7:17.

  5. Curry S, Byers T, Hewitt M. Fulfilling the potential of Cancer prevention and early detection. Washington, DC: National Academies Press; 2003.

  6. Remington P, Brownson R, Wegner M. Chronic disease epidemiology and control, 4th edn. Washington, DC: American Public Health Association; 2016.

  7. Colditz GA, Wolin KY, Gehlert S. Applying what we know to accelerate cancer prevention. Sci Transl Med. 2012;4(127):127rv124.

  8. Padek M, Allen P, Erwin PC, Franco M, Hammond RA, Heuberger B, Kasman M, Luke DA, Mazzucca S, Moreland-Russell S, et al. Toward optimal implementation of cancer prevention and control programs in public health: a study protocol on mis-implementation. Implement Sci. 2018;13(1):49.

  9. Furtado KS, Budd EL, Armstrong R, Pettman T, Reis R, Sung-Chan P, Wang Z, Brownson RC. A cross-country study of mis-implementation in public health practice. BMC Public Health. 2019;19(1):270.

  10. Brownson RC, Allen P, Jacob RR, Harris JK, Duggan K, Hipp PR, Erwin PC. Understanding mis-implementation in public health practice. Am J Prev Med. 2015;48(5):543–51.

  11. Food and Nutrition Service (USDA). Child Nutrition Programs: Flexibilities for Milk, Whole Grains, and Sodium Requirements. 7 CFR Parts 210, 215, 220, and 226. 2018;83(238). https://www.govinfo.gov/content/pkg/FR-2018-12-12/pdf/2018-26762.pdf. Accessed 15 Mar 2020.

  12. Mansfield JL, Savaiano DA. Effect of school wellness policies and the healthy, hunger-free kids act on food-consumption behaviors of students, 2006–2016: a systematic review. Nutr Rev. 2017;75(7):533–52.

  13. Briant KJ, Wang L, Holte S, Ramos A, Marchello N, Thompson B. Understanding the impact of colorectal cancer education: a randomized trial of health fairs. BMC Public Health. 2015;15:1196.

  14. Berwick DM. Screening in health fairs. A critical review of benefits, risks, and costs. JAMA. 1985;254(11):1492–8.

  15. Dreisinger M, Leet TL, Baker EA, Gillespie KN, Haas B, Brownson RC. Improving the public health workforce: evaluation of a training course to enhance evidence-based decision making. J Public Health Manage Pract. 2008;14(2):138–43.

  16. Gibbert WS, Keating SM, Jacobs JA, Dodson E, Baker E, Diem G, Giles W, Gillespie KN, Grabauskas V, Shatchkute A, et al. Training the Workforce in Evidence-Based Public Health: An Evaluation of Impact Among US and International Practitioners. Prev Chronic Dis. 2013;10:E148.

  17. Brownson RC, Fielding JE, Green LW. Building capacity for evidence-based public health: reconciling the pulls of practice and the push of research. Annu Rev Public Health. 2018;39:27–53.

  18. Brownson RC, Allen P, Duggan K, Stamatakis KA, Erwin PC. Fostering more-effective public health by identifying administrative evidence-based practices: a review of the literature. Am J Prev Med. 2012;43(3):309–19.

  19. Harris AL, Scutchfield FD, Heise G, Ingram RC. The relationship between local public health agency administrative variables and county health status rankings in Kentucky. J Public Health Manage Pract. 2014;20(4):378–83.

  20. Norton WE, Chambers DA. Unpacking the complexities of de-implementing inappropriate health interventions. Implement Sci. 2020;15(1):2.

  21. Allen P, Jacob RR, Parks RG, Mazzucca S, Hu H, Robinson M, Dobbins M, Dekker D, Padek M, Brownson RC. Perspectives on program mis-implementation among U.S. local public health departments. BMC Health Serv Res. 2020;20(1):258.

  22. The State Health Department. The American Public Health Association Website. 1968. https://www.apha.org/policies-and-advocacy/public-health-policy-statements/policy-database/2014/07/21/09/50/the-state-health-department. Accessed 11 Jan 2020.

  23. FY 2018 Grant Funding Profile. The Centers for Disease Control and Prevention Website. Updated May 21, 2020. https://www.cdc.gov/fundingprofiles/. Accessed 21 Nov 2019.

  24. Sustaining State Funding for Tobacco Control. The Centers for Disease Control and Prevention Website. Updated March 30, 2020. https://www.cdc.gov/tobacco/stateandcommunity/tobacco_control_programs/program_development/sustainingstates/. Accessed 21 Nov 2019.

  25. Erwin PC, Padek MM, Allen P, Smith R, Brownson RC. The association between evidence-based decision making and accreditation of state health departments. J Public Health Manage Pract. 2020;26(5):419–27.

  26. Aarons GA, Ehrhart MG, Farahnak LR. The implementation leadership scale (ILS): development of a brief measure of unit level implementation leadership. Implement Sci. 2014;9(1):45.

  27. Allen P, Sequeira S, Jacob RR, Hino AA, Stamatakis KA, Harris JK, Elliott L, Kerner JF, Jones E, Dobbins M, et al. Promoting state health department evidence-based cancer and chronic disease prevention: a multi-phase dissemination study with a cluster randomized trial component. Implement Sci. 2013;8:141.

  28. Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The program sustainability assessment tool: a new instrument for public health programs. Prev Chronic Dis. 2014;11:130184.

  29. Jacobs JA, Clayton PF, Dove C, Funchess T, Jones E, Perveen G, Skidmore B, Sutton V, Worthington S, Baker EA, et al. A survey tool for measuring evidence-based decision making capacity in public health agencies. BMC Health Serv Res. 2012;12:57.

  30. Reis RS, Duggan K, Allen P, Stamatakis KA, Erwin PC, Brownson RC. Developing a Tool to Assess Administrative Evidence-Based Practices in Local Health Departments. Front Public Health Serv Syst Res. 2014;3(3). https://doi.org/10.13023/FPHSSR.0303.02.

  31. Hannon PA, Fernandez ME, Williams RS, Mullen PD, Escoffery C, Kreuter MW, Pfeiffer D, Kegler MC, Reese L, Mistry R, et al. Cancer control planners' perceptions and use of evidence-based programs. J Public Health Manage Pract. 2010;16(3):E1–8.

  32. Stamatakis KA, McQueen A, Filler C, Boland E, Dreisinger M, Brownson RC, Luke DA. Measurement properties of a novel survey to assess stages of organizational readiness for evidence-based interventions in community chronic disease prevention settings. Implement Sci. 2012;7(1):65.

  33. Stamatakis KA, Leatherdale ST, Marx CM, Yan Y, Colditz GA, Brownson RC. Where is obesity prevention on the map?: distribution and predictors of local health department prevention activities in relation to county-level obesity prevalence in the United States. J Public Health Manage Pract. 2012;18(5):402–11.

  34. Mancini JA, Marek LI. Sustaining community-based programs for families: conceptualization and measurement*. Fam Relat. 2004;53(4):339–47.

  35. Rye M, Torres EM, Friborg O, Skre I, Aarons GA. The evidence-based practice attitude Scale-36 (EBPAS-36): a brief and pragmatic measure of attitudes to evidence-based practice validated in US and Norwegian samples. Implement Sci. 2017;12(1):44.

  36. Kothari A, Edwards N, Hamel N, Judd M. Is research working for you? Validating a tool to examine the capacity of health organizations to use research. Implement Sci. 2009;4:46.

  37. Helfrich CD, Li YF, Sharp ND, Sales AE. Organizational readiness to change assessment (ORCA): development of an instrument based on the promoting action on research in health services (PARIHS) framework. Implement Sci. 2009;4:38.

  38. Brennan SE, McKenzie JE, Turner T, Redman S, Makkar S, Williamson A, Haynes A, Green SE. Development and validation of SEER (seeking, engaging with and evaluating research): a measure of policymakers' capacity to engage with and use research. Health Res Policy Syst. 2017;15(1):1.

  39. Shortell SM, McClellan SR, Ramsay PP, Casalino LP, Ryan AM, Copeland KR. Physician practice participation in accountable care organizations: the emergence of the unicorn. Health Serv Res. 2014;49(5):1519–36.

  40. Brownson RC, Reis RS, Allen P, Duggan K, Fields R, Stamatakis KA, Erwin PC. Understanding administrative evidence-based practices: findings from a survey of local health department leaders. Am J Prev Med. 2013;46(1):49–57.

  41. Association of State and Territorial Health Officials. ASTHO Profile of State Public Health. vol. Four. Arlington, VA: Association of State and Territorial Health Officials; 2017.

  42. Association of State and Territorial Health Officials. ASTHO Profile of State and Territorial Public Health, Volume Four. Arlington: Association of State and Territorial Health Officials; 2017. https://www.astho.org/Profile/Volume-Four/2016-ASTHO-Profile-of-State-and-Territorial-Public-Health/. Accessed 15 Oct 2019.

  43. Meit MKA, Dickman I, Brown A, Hernandez N, Kronstadt J. An Examination of Public Health Financing in the United States. (Prepared by NORC at the University of Chicago.). Washington, DC: The Office of the Assistant Secretary for Planning and Evaluation; 2013.

  44. Hodge LM, Turner KMT. Sustained implementation of evidence-based programs in disadvantaged communities: a conceptual framework of supporting factors. Am J Community Psychol. 2016;58(1–2):192–210.

  45. Shelton RC, Cooper BR, Stirman SW. The sustainability of evidence-based interventions and practices in public health and health care. Annu Rev Public Health. 2018;39:55–76.

  46. Brownson R, Colditz G, Proctor E. Dissemination and implementation research in health: translating science to practice, 2nd edn. New York: Oxford University Press; 2018.

  47. Norton WE, Kennedy AE, Chambers DA. Studying de-implementation in health: an analysis of funded research grants. Implement Sci. 2017;12(1):144.

  48. Colla CH, Mainor AJ, Hargreaves C, Sequist T, Morden N. Interventions aimed at reducing use of low-value health services: a systematic review. Med Care Res Rev. 2017;74(5):507–50.

  49. Franco LM, Bennett S, Kanfer R. Health sector reform and public sector health worker motivation: a conceptual framework. Soc Sci Med. 2002;54(8):1255–66.

  50. McConnell CR. Larger, smaller, and flatter: the evolution of the modern health care organization. Health Care Manager. 2005;24(2):177–88.

  51. People with Certain Medical Conditions. Centers for Disease Control Website. Updated December 28, 2020. https://www.cdc.gov/coronavirus/2019-ncov/need-extra-precautions/people-with-medical-conditions.html?CDC_AA_refVal=https%3A%2F%2Fwww.cdc.gov%2Fcoronavirus%2F2019-ncov%2Fneed-extra-precautions%2Fgroups-at-higher-risk.html. Accessed 17 Apr 2020.

  52. Tu H, Wen CP, Tsai SP, Chow WH, Wen C, Ye Y, Zhao H, Tsai MK, Huang M, Dinney CP, et al. Cancer risk associated with chronic diseases and disease markers: prospective cohort study. BMJ. 2018;360:k134.

  53. Padek et al. Predictors of mis-implementation of chronic disease control programs in state health departments. In Proceedings from the 12th Annual Conference on the Science of Dissemination and Implementation: Arlington, VA, USA. 4-6 December 2019. Implement Sci. 2020;15(Suppl 1):25.

Acknowledgments

This work was previously presented at the 12th Annual Conference on the Science of Dissemination and Implementation in Washington D.C. in December 2019. The abstract from that presentation was published in the proceedings of that conference in Implementation Science [53].

We would like to acknowledge Melissa Franco for help with initial survey development, testing, and data collection, and Rebekah Jacob for consultation regarding the data analysis. We would also like to acknowledge other members of our research team who provided feedback and input on the survey development, data collection process, and data analyses: Sarah Moreland-Russell, Ross Hammond, Paul Erwin, Joe Ornstein, and Matt Kasman. We would like to acknowledge our stakeholder advisory board, which consisted of former state health department employees who provided feedback throughout this process. The National Association of Chronic Disease Directors has also provided consultation during this project.

Funding

This project is funded by the National Cancer Institute of the National Institutes of Health (R01CA214530). Additional support for this project came from National Cancer Institute (P50CA24431, T32CA190194), the Centers for Disease Control and Prevention (U48DP006395). The findings and conclusions in this paper are those of the authors and do not necessarily represent the official positions of the National Institutes of Health or the Centers for Disease Control and Prevention. The funders did not have any influence on the design of the study, data collection, data analysis, interpretation of the data or in the writing of the manuscript.

Author information

Contributions

The authors’ contributions are as follows: MMP coordinated the survey development and data collection, conducted the initial data analysis, and led the writing of the manuscript. SM, PA, and DL contributed to survey development, provided feedback on data analysis, and edited and reviewed the final manuscript. EWR assisted with data collection, conducted preliminary data analysis, and provided input on the final manuscript. ET provided feedback on data analysis and input on the final manuscript. RCB is the principal investigator of this study; he contributed to survey development, provided feedback on data analysis, and reviewed the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Margaret M. Padek.

Ethics declarations

Ethics approval and consent to participate

Human subjects approval was obtained from the Washington University in St. Louis Institutional Review Board (#201812062). This study received exempt status per the U.S. Department of Health and Human Services, Office for Human Research Protections guidelines, 45 CFR 46.101(b)(2). The first page of the online survey required participants to click the “I Consent” button in order to proceed and participate in the study. Participants were sent an Exempt IRB Information Sheet with their email invitation noting that participation in the survey indicated their consent.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Mis-Implementation National Survey.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Padek, M.M., Mazzucca, S., Allen, P. et al. Patterns and correlates of mis-implementation in state chronic disease public health practice in the United States. BMC Public Health 21, 101 (2021). https://doi.org/10.1186/s12889-020-10101-z
