Assessing community readiness online: a concurrent validation study
BMC Public Health volume 15, Article number: 598 (2015)
Community readiness for facilitation and uptake of interventions can impact the success of community-based prevention efforts. As currently practiced, measuring community readiness can be a resource intensive process, compromising its use in evaluating multisite community-based prevention efforts. The purpose of this study was to develop, test and validate a more efficient online version of an existing community readiness tool and identify potential problems in completing assessments. This study was conducted in the context of a complex community-based childhood obesity prevention program in South Australia.
Following pre-testing, an online version of the community readiness tool was created, wherein respondents, with detailed knowledge of their community and prevention efforts, rated their communities on five anchored rating scales (Knowledge of Efforts, Leadership, Knowledge of the Issue, Community Climate, and Resources). Respondents completed the standard, over-the-phone community readiness interview (“gold standard”) and the new online survey. Paired t-test, St. Laurent’s correlation coefficient and intra-class correlation (ICC) were used to determine the validity of the online tool. Contact summary forms were completed after each interview to capture interview quality.
Twenty-five respondents completed both assessments. There was a statistically significant difference in the overall community readiness scores between the two methods (paired t-test p = 0.03); online scores were consistently higher than interview scores. St. Laurent’s correlation of 0.58 (95 % CI 0.42–0.73) was moderate; the ICC of 0.65 (95 % CI 0.35–0.83) was good. Only for the leadership and resources dimensions was there no statistically significant difference between the scores from the two methods (p = 0.61, p = 0.08 respectively). St. Laurent’s correlation (r = 0.83, 95 % CI 0.71–0.92) and the ICC (0.78, 95 % CI 0.57–0.90) were excellent for leadership. Qualitative results from the standard interview method suggest that some respondents felt reluctant to answer questions on behalf of other community members. This may have impacted their self-selected ratings and/or responses to questions during the interview.
Concurrent validity for the online method was supported for the Leadership dimension only. However, the online method holds promise as it reduces time and resource burden, allowing for a quicker return of results to the community to inform program planning, implementation and evaluations to improve community health.
The implementation and ultimate success of community-based health interventions may be greatly impacted by contextual factors such as a community’s social and built environments [1–3]. A comprehensive evaluation of the outcomes of prevention efforts requires understanding the contextual factors that may have impacted these outcomes.
The level of effort that a community is prepared to apply in responding to a particular issue is known as ‘community readiness to change’. It combines several contextual constructs which can be difficult to quantify yet are key to understanding the success, or lack thereof, of community-based health efforts [5, 6]. Contextual constructs often cited as key dimensions of community readiness include community awareness and knowledge of the issue and of current efforts to address it, leadership knowledge and support, and community resources and capacity [7, 8].
The Community Readiness Tool (CRT) developed by Edwards et al. is widely used for assessing community readiness and has been identified as useful for evaluation and program planning purposes [9–12]. This tool was created to apply the Community Readiness Model (CRM), which combines and expands upon the personal stages of change and community development principles. The CRM defines five dimensions of community readiness which are scored through the CRT. These dimensions are: Community Knowledge of Existing Efforts, Leadership, Community Climate, Community Knowledge about the Issue, and Resources. The standard method of administering the CRT is detailed elsewhere. Briefly, in order to assess community readiness the CRT protocol requires 4–6 semi-structured interviews per community, each taking approximately 45 minutes. Interviews are transcribed and scored by two independent scorers using anchored rating scales. The CRT has been used successfully in a number of different settings [16–19] for a wide variety of applications. However, this process is time and resource intensive, limiting use of the CRT to small-scale or single-site communities, and preventing community readiness evaluation of large-scale, multi-site, community-based health promotion initiatives [20, 21].
To improve the resource efficiency of the CRT, we developed an online survey version of the tool. This report describes our concurrent validation of the online protocol against the standard phone version of the tool. We hypothesised that there would be no difference in the overall community readiness score between the gold standard interview method and the online survey version. We also hypothesised no differences in dimension scores between the two methods.
The setting for this study is the Obesity Prevention and Lifestyle (OPAL) program currently underway in 21 rural and urban communities in South Australia (n = 20) and the Northern Territory (n = 1). The goal of OPAL is “to improve eating and activity patterns of children, through families and communities in OPAL regions, and thereby increase the proportion of 0–18 year olds in the healthy weight range”. Each participating community received $75,000 per year in funding for project implementation as well as two full time staff. Interventions are implemented within each community at the discretion of the local staff who consult with host agencies, partners and community members, although they must align with the OPAL program framework and principles drawn from the French EPODE model. The 21 communities have staggered starting points over four years: the first set of communities began in late 2009 and the final stage commenced in late 2012. The communities have varying demographic and geographic profiles and contain between 1 and 45 suburbs (mean = 10.3, median = 9). For this study, the suburb was the unit of analysis. Each community provided at least one key informant who completed the CRT for a chosen suburb within the community. A full community readiness assessment was not completed for any given suburb or community as this study focussed on the validation of the online version of the CRT; the CRT was validated at the individual level, not at the community level. In other words, we are comparing whether the scores for the gold standard interview for an individual were the same as the individual’s online scores.
Ethical approval and consent
Ethical approval for this study was granted by the South Australian Department of Health Human Research Ethics Committee (reference number 327/11/2012) and the University of South Australia Human Research Ethics Committee (reference number 25002). All participants were given an information sheet, had the opportunity to ask questions about the study, and signed a consent form prior to participating.
Sampling and validation
Key informants (n = 30) for the concurrent validation study were selected based on their knowledge of obesity prevention activities being implemented within a given suburb. The positions of the key informants varied between suburbs and included elected councillors, local council staff, teachers, childcare centre and other non-government organisation directors, and OPAL staff. Key informants were asked to complete two versions of the CRT. First, they were invited to participate in the gold standard telephone interview. This semi-structured interview consisted of the 20 core questions of the CRT, adapted to refer to the issue at hand (childhood obesity prevention) and the community (in this case, suburb) in question. To maximise validity, the interviewer underwent rigorous training by experienced CRT researchers in interviewing best practices. Immediately following the telephone interview, a contact summary form was completed by the interviewer, as recommended by Miles, Huberman and Saldana. It provided an opportunity for the interviewer to comment on the overall quality of the interview (e.g., depth of responses, any interruptions to the interview, tentativeness in responding to questions), sections of the interview which were problematic or poorly answered (e.g., the respondent did not possess the requisite information), or any other noteworthy observations (e.g., very low levels of readiness, buy-in from the respondent). A minimum of four weeks later, participants were invited to complete the online survey version of the tool. This ‘wash out’ period was to ensure that the phone interview questions did not influence the online responses. Informants completed the phone interview first because exposing informants to the anchored rating scales prior to the phone interview could have compromised the gold standard methodology.
Change in readiness over the one month ‘wash out’ period was unlikely given the difficulty in increasing or decreasing community readiness. Previous research using the CRT over multiple time points has found that every year of intervention was associated with a 0.6 increase in overall community readiness.
To promote high quality scoring, interviews were transcribed and scored by two expert scorers who independently read through each interview transcript six times – once without scoring and then five more times to score each dimension separately. The dimensions were scored on anchored rating scales, with scorers starting at the lowest readiness level and looking for evidence of the one above until no such evidence could be found. Once the two scorers completed their independent scoring, they came together to discuss their scores and arrive at a final consensus score for each dimension. The overall community readiness score (between 1 and 9) was then calculated by averaging the five dimension scores.
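The arithmetic of the final step can be sketched briefly. The snippet below is illustrative only: the dimension names follow the CRM, but the consensus scores are hypothetical, not data from this study.

```python
# Hypothetical consensus scores for one suburb, each on the 1-9 anchored scale.
consensus_scores = {
    "knowledge_of_efforts": 4.0,
    "leadership": 6.0,
    "knowledge_of_issue": 3.0,
    "community_climate": 5.0,
    "resources": 7.0,
}

# Overall community readiness: the mean of the five dimension scores.
overall = sum(consensus_scores.values()) / len(consensus_scores)
print(round(overall, 2))  # 5.0
```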
Online version of the CRT
Development of the online version of the tool was facilitated using Survey Monkey online survey software. The survey began with the same definition of the issue (childhood obesity) as the standard interview. However, unlike the gold standard interview version of the CRT, where respondents are asked questions with their responses being scored on the anchored rating scales, the online version of the tool asks respondents to directly score a specific suburb of interest on each of five anchored rating scales corresponding to the five community readiness dimensions (Community Knowledge of Efforts, Leadership, Community Knowledge of the Issue, Community Climate, and Resources). The anchored rating scales can be found in the Community Readiness Handbook. Thus, the scoring process is conducted by the respondent online, rather than by the researcher through a transcript. The core questions of the gold standard tool are used as prompts in the instructions for each dimension, but are not specifically answered in the online tool. Once the respondent had scored each of the five anchored rating scales, an overall community readiness score (between 1 and 9) was calculated by taking the average of each dimension score.
The online survey underwent a pre-testing process whereby three key informants completed the online survey and provided formal feedback through a semi-structured recorded phone interview. This feedback was used to improve the online version of the tool, with alterations made to survey instructions and visual elements of the anchored rating scales.
Scores on the gold standard (interview) and the online survey version of the tool were contrasted using St. Laurent’s gold standard correlation coefficient to assess the validity of the new administration method. This test is used to compare two different methods of measurement when one is a gold standard. In addition, the intra-class correlation coefficient (two-way model with fixed raters – ICC 3,1), paired t-test and Wilcoxon signed ranks test were used to further test the differences in scores between the two administration methods. The use of multiple statistical tests is recommended to overcome the shortcomings of any single procedure. The Wilcoxon signed ranks test was included as a non-parametric test in case normality assumptions were not met. Statistical tests were undertaken using the Pairs Module of WinPEPI v.11.39. An a priori power calculation for a paired t-test estimated that 25 respondents were required to detect a mean difference of 0.50 in community readiness scores, with power of 0.80 and alpha of 0.05. Based on previous studies, the standard deviation was estimated at 0.85. Contact summary forms were analysed through qualitative content analysis by the first and last authors. Specifically, directed content analysis was used. Initial coding categories of Interview Quality, Completeness of Information, and Salient/Illuminating Issues were identified at the outset. Information within each category was coded to depict the range of responses or issues raised.
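For readers who wish to reproduce parts of this analysis plan, the sketch below implements ICC(3,1) from its two-way ANOVA definition and the normal-approximation sample-size formula for a paired t-test (with the usual small-sample correction), alongside scipy’s paired tests. This is an illustrative sketch under stated assumptions, not the authors’ analysis code (the original analysis used WinPEPI, and St. Laurent’s coefficient is omitted here); the paired scores are hypothetical.

```python
import numpy as np
from scipy import stats

def icc_3_1(ratings):
    """ICC(3,1): two-way model with fixed raters, single measure,
    computed from the two-way ANOVA mean squares."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()  # between methods
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

def paired_t_sample_size(delta, sd, alpha=0.05, power=0.80):
    """Approximate n for a two-sided paired t-test: normal-approximation
    formula plus the z_alpha^2 / 2 small-sample correction."""
    z_a = stats.norm.ppf(1 - alpha / 2)
    z_b = stats.norm.ppf(power)
    n = ((z_a + z_b) * sd / delta) ** 2 + z_a ** 2 / 2
    return int(np.ceil(n))

# Hypothetical paired overall scores: interview (gold standard) vs online.
interview = np.array([4.2, 5.0, 3.9, 5.5, 4.8, 5.1, 4.4, 5.9])
online = np.array([4.6, 5.4, 4.1, 5.2, 5.3, 5.8, 4.9, 6.3])

t_p = stats.ttest_rel(interview, online).pvalue   # paired t-test
w_p = stats.wilcoxon(interview, online).pvalue    # non-parametric alternative
icc = icc_3_1(np.column_stack([interview, online]))

print(paired_t_sample_size(0.50, 0.85))  # 25, matching the a priori calculation
```

Note that the sample-size function reproduces the study’s figure of 25 respondents for a mean difference of 0.50, SD 0.85, power 0.80 and alpha 0.05.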
Thirty phone interviews were conducted with an average length of 33 minutes (range = 22–52, median = 32). However, two participants did not provide sufficient information during the interview to allow for scoring and three participants did not complete the online survey, leaving a final sample of 25 key informants. Average completion time of the online survey was 29 minutes (range = 10–60, median = 30).
The results for the overall and dimension scores are shown in Table 1. On average, the online survey score was 0.39 (SD = 0.83) points higher than the phone interview score, with difference scores ranging from −1.50 to 1.75. Overall community readiness scores for the telephone interview method ranged from 3.88 to 6.00 and the online scores ranged from 2.46 to 7.17. In the majority of cases (n = 18), the telephone and online scores were within 1 point of each other. The paired t-test revealed a statistically significant difference in overall scores between the two methods (p = 0.03). St. Laurent’s correlation coefficient was 0.58 (95 % CI 0.42–0.73), indicating a moderate correlation between the overall community readiness scores. The intra-class correlation calculated using a two-way model with fixed raters was 0.65 (95 % CI 0.35–0.83), a figure regarded as good reliability but less than the threshold for excellent (0.75).
As data for three of the five dimension scores (Community Climate, Community Knowledge of the Issue, Resources) were not normally distributed, the Wilcoxon signed ranks test was used to test for differences between the two methods on these dimensions. Dimension scores for Leadership (paired t-test, p = 0.61) and Resources (Wilcoxon signed ranks test, p = 0.09) did not differ between online and phone interview methods. Dimension scores for Knowledge of Existing Efforts (paired t-test, p = 0.01), Community Climate (Wilcoxon signed ranks test, p < 0.001) and Community Knowledge about the Issue (Wilcoxon signed ranks test, p = 0.01) were found to differ between administration methods. Data for the Resources dimension were highly skewed towards scores above 6, consistent with the level of resourcing associated with the OPAL program. St. Laurent’s correlation coefficients were moderate (0.50–0.58) for all but the Leadership dimension, for which a strong correlation (0.83) was observed. Intra-class correlations demonstrated greater variation, but the Leadership dimension was the strongest at 0.78, indicating excellent reliability.
Content analysis of the telephone contact summary forms found that whilst most respondents provided answers with appropriate information for scoring, there was some unease when answering questions that provide information for scoring dimensions pertaining to the knowledge and attitudes of suburb residents (Table 2). Eleven respondents expressed reluctance to answer questions for the dimensions of Community Climate, Community Knowledge of the Issue, and Community Knowledge of the Existing Efforts as they did not feel they knew all of the residents and thus could not answer on their behalf. When this occurred, the interviewer clarified that the respondent was required only to answer for those whom they knew, resulting in elaborated answers in the majority of cases. This elaboration ensured that there was sufficient information for the scoring process to be completed. Questions relating to the dimensions of Leadership and Resources were answered without this same unease.
Discussion and conclusions
This concurrent validation study found that the overall results of an online administration of the CRT were significantly different from the gold standard interview method of administration, despite showing a good level of correlation across dimensions. Although the differences in scores between methods were not large, with most differences lying within one point, only the Leadership dimension demonstrated excellent reliability between the two methods of administration. Thus, only the Leadership dimension would appear to be ready for online administration at this stage.
The largest differences between methods were observed for dimensions where respondents mentioned difficulty in answering for all residents of the suburb being rated – Community Knowledge of Efforts, Community Knowledge of the Issue, and Community Climate. Interestingly, for these dimensions, the interview scores were, on average, significantly lower than the online scores. This may reflect the lack of confidence felt by respondents in answering questions related to these dimensions, thus leading to less information for scorers to use in assigning a readiness level. Scorers are trained to use only the information provided by respondents and to only assign a level of readiness if wholly warranted by the respondents’ answers. Qualitative analyses uncovered weaknesses in both the online and telephone survey methods for these dimensions which had the largest quantitative differences. Dimensions relating to Community Knowledge of Existing Efforts, Community Climate and Knowledge of the Issue were at times answered poorly, with respondents reluctant to speculate on the attitudes of the broader community. Although the respondents are not required to know the attitudes and knowledge of everyone in their community, rather only those to whom they are exposed, the framing of the questions can be interpreted as though such knowledge is requisite. Improvements to the introductory explanation and core questions could help to alleviate these concerns. Despite the selection of informants based on their knowledge, organisation tenure within their place of work, and familiarity with their chosen suburb, some questions were difficult to answer. Respondents in this study required prompts to answer these questions fully; it is possible that more knowledgeable respondents would have provided richer answers for both the interview and online survey versions of the tool where such prompts are not possible.
Whilst previous research has identified time and resource challenges as limitations to the application of a full community readiness assessment [20, 21, 31], the CRT has not previously undergone formal qualitative assessment to unpack these difficulties. The results from the present study are the first in the published literature to qualitatively assess the interview quality of the CRT.
While improvements to the online survey are evidently still required, it is nonetheless considerably less difficult to administer than the standard telephone method. The online survey can be completed in the respondent’s own time, reducing burden not only for the evaluator, but for the participant. Furthermore, the online survey does not need to be recorded or transcribed before coding, and is simply scored by the participant themselves. The online method still requires participant recruitment, data entry and analysis as well as effort to set up and tailor the questions and scales to the appropriate issue and community. However, these tasks are also required in the standard telephone method. Whilst the online method does not remove all time and resource requirements, it does significantly lower them, making assessment of community readiness much more viable for large scale community-based programs where communities, neighbourhoods or suburbs are the unit of analysis.
All five community readiness dimensions are important for evaluating the current state of the community; however, assessing leadership readiness alone will provide valuable information for evaluators, community members and practitioners. Leaders within a community may be formally elected representatives of the people or informal opinion leaders who have the capacity to influence their communities. Support from both formal and informal leaders is crucial to the success of public health programs. Given the importance of leadership readiness data, and the statistical robustness of its online assessment demonstrated in the present study, wider usage of the leadership component of the online community readiness survey is warranted.
Health program evaluations are increasingly incorporating greater flexibility and accountability, with results expected by the community on an on-going basis. This allows for greater responsiveness within interventions, but also necessitates simple and easy-to-use evaluation tools. As a result, laborious and resource intensive instruments are making way for those which are straightforward and quick to administer. This is particularly true in the current economic climate as time and resources for evaluation become increasingly scarce. Although the online medium is not without drawbacks, the ubiquitous nature of the online world in most developed countries holds great opportunities for health program evaluation. With further refinement, the online CRT will allow for simpler administration and completion, and consequently more accurate and speedy reporting. Ultimately, the CRT may assist in informing and shaping prevention efforts in many areas of public health.
Abbreviations
CRM: Community Readiness Model
CRT: Community Readiness Tool
OPAL: Obesity Prevention and Lifestyle
Hoelscher DM, Springer AE, Ranjit N, Perry CL, Evans AE, Stigler M, et al. Reductions in child obesity among disadvantaged school children with community involvement: The Travis County CATCH trial. Obesity. 2010;18 Suppl 1:S36–44.
Economos CD, Hyatt RR, Goldberg JP, Must A, Naumova EN, Collins JJ, et al. A community intervention reduces BMI z-score in children: Shape Up Somerville first year results. Obesity. 2007;15(5):1325–36.
Weker H. Simple obesity in children. A study on the role of nutritional factors. Med Wieku Rozwoj. 2006;10(1):3–191.
Lawrence R, Bibbins-Domingo K, Brennan L, Daniels N, Gaskin D, Green L, Haveman R, Jenson J, Nieto F, Polsky D, Potvin L, Pronk N, Russel L, Teutsh S, White C. An Integrated Framework for Assessing the Value of Community-Based Prevention. Washington, DC: Institute of Medicine; 2012.
Edwards RW, Jumper-Thurman P, Plested BA, Oetting ER, Swanson L. Community readiness: Research to practice. J of Community Psychol. 2000;28(3):291–307.
Hawe P, Noort M, King L, Jordens C. Multiplying health gains: The critical role of capacity-building within health promotion programs. Health Policy. 1997;39(1):29–42.
Puska P, Nissinen A, Tuomilehto J, Salonen JT, Koskela K, McAlister A, et al. The community-based strategy to prevent coronary heart disease: conclusions from the ten years of the North Karelia project. Annu Rev Public Health. 1985;6:147–93.
Goodman RM, Speers MA, McLeroy K, Fawcett S, Kegler M, Parker E, et al. Identifying and defining the dimensions of community capacity to provide a basis for measurement. Health Educ Behav. 1998;25(3):258–78.
Carlson LA, Harper KS. One facility’s experience using the community readiness model to guide services for gay, lesbian, bisexual, and transgender older adults. Adultspan Journal. 2011;10(2):66–77.
Sliwa S, Goldberg JP, Clark V, Collins J, Edwards R, Hyatt RR, et al. Using the community readiness model to select communities for a community-wide obesity prevention intervention. Prev Chronic Dis. 2011;8(6):A150.
Frerichs L, Brittin J, Stewart C, Robbins R, Riggs C, Mayberger S, et al. SaludableOmaha: development of a youth advocacy initiative to increase Community Readiness for obesity prevention, 2011–2012. Prev Chronic Dis. 2012;9:E173.
Kostadinov I, Daniel M, Stanley L, Gancia A, Cargo M. A Systematic Review of Community Readiness Tool Applications: Implications for Reporting. Int J Environ Res Public Health. 2015;12(4):3453–68.
Prochaska JO, DiClemente CC. Stages and processes of self-change of smoking: toward an integrative model of change. J Consult Clin Psychol. 1983;51(3):390–5.
Rogers E. Diffusion of Innovations. 3rd ed. New York: The Free Press; 1983.
Oetting ER, Plested B, Edwards RW, Thurman PJ, Kelly KJ, Beauvais F. Community readiness for community change: Tri-Ethnic Center community readiness handbook. Edited by Stanley L, 2nd edn: Colorado State University; 2014.
Aboud F, Huq NL, Larson CP, Ottisova L. An assessment of community readiness for HIV/AIDS preventive interventions in rural Bangladesh. Soc Sci Med. 2010;70(3):360–7.
Parker RN, Alcaraz R, Payne PR. Community readiness for change and youth violence prevention: a tale of two cities. Am J Community Psychol. 2011;48(1–2):97–105.
Peercy M, Gray J, Thurman PJ, Plested B. Community readiness: An effective model for tribal engagement in prevention of cardiovascular disease. Fam Community Health. 2010;33(3):238–47.
York NL, Hahn EJ, Rayens MK, Talbert J. Community readiness for local smoke-free policy change. Am J Health Promot. 2008;23(2):112–20.
Millar L, Robertson N, Allender S, Nichols M, Bennett C, Swinburn B. Increasing community capacity and decreasing prevalence of overweight and obesity in a community based intervention among Australian adolescents. Prev Med. 2013;56(6):379–84.
Ehlers DK, Huberty JL, Beseler CL. Is school community readiness related to physical activity before and after the Ready for Recess intervention? Health Educ Res. 2013;28(2):192–204.
Jones M, Cargo M, Cobiac L, Daniel M. Mapping the program logic for the South Australia Obesity Prevention and Lifestyle (OPAL) initiative. Obes Res Clin Pract. 2011;5(S1):17–8.
Romon M, Lommez A, Tafflet M, Basdevant A, Oppert JM, Bresson JL, et al. Downward trends in the prevalence of childhood overweight in the setting of 12-year school- and community-based programmes. Public Health Nutr. 2009;12(10):1735–42.
Miles MB, Huberman AM, Saldana J. Methods of Exploring. In: Qualitative Data Analysis. 3rd ed. California, USA: Sage Publications; 2014. p. 122–30.
Jason LA, Pokorny SB, Kunz C, Adams M. Maintenance of community change: enforcing youth access to tobacco laws. J Drug Educ. 2004;34(2):105–19.
St Laurent RT. Evaluating agreement with a gold standard in method comparison studies. Biometrics. 1998;54(2):537–45.
Zaki R, Bulgiba A, Ismail R, Ismail NA. Statistical methods used to test for agreement of medical instruments measuring continuous variables in method comparison studies: a systematic review. PLoS One. 2012;7(5):e37908.
Battaglia TA, Murrell SS, Bhosrekar SG, Caron SE, Bowen DJ, Smith E, et al. Connecting Boston’s public housing developments to community health centers: who’s ready for change? Prog Community Health Partnersh. 2012;6(3):239–48.
Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15(9):1277–88.
Shoukri MM, Pause CA. Statistical methods for health sciences. 2nd ed. Florida, USA: CRC Press; 1999.
Kunz CB, Jason LA, Adams M, Pokorny SB. Assessing police community readiness to work on youth access and possession of tobacco. J Drug Educ. 2009;39(3):321–37.
Swinburn BA. Obesity prevention: The role of policies, laws and regulations. Aust New Zealand Health Policy. 2008;5:12.
Valente TW, Pumpuang P. Identifying opinion leaders to promote behavior change. Health Educ Behav. 2007;34(6):881–96.
Brennan L, Castro S, Brownson RC, Claus J, Orleans CT. Accelerating evidence reviews and broadening evidence standards to identify effective, promising, and emerging policy and environmental strategies for prevention of childhood obesity. Annu Rev Public Health. 2011;32:199–223.
Carman JG. Evaluation practice among community-based organizations: Research into the reality. Am J Eval. 2007;28(1):60–75.
The authors gratefully acknowledge the contribution of the South Australian Department of Health’s OPAL program in providing funding for this research and the OPAL State-wide Evaluation Coordination Unit for their ongoing support. The views expressed are solely those of the authors and do not necessarily reflect those of the South Australian Government or any other Australian, state or local government.
IK declares that funding for his research is provided by SA Health’s OPAL program. MC, LS and MD declare that they have no competing interests.
IK was responsible for conducting the phone interviews, collecting the qualitative data, organising the online surveys, and data entry. IK, MC and MD conducted the data analysis. IK, MC and LS developed the online surveys. IK and MC developed the qualitative data methodology and contact summary forms. All authors contributed to the writing and editing of the manuscript. All authors read and approved the final manuscript.
Kostadinov, I., Daniel, M., Stanley, L. et al. Assessing community readiness online: a concurrent validation study. BMC Public Health 15, 598 (2015). https://doi.org/10.1186/s12889-015-1953-5