This paper reported the quantitative results of the effect of a knowledge-broking team on the use of obesity-related evidence in policy briefs. Specifically, it reported on changes in perceptions of evidence-informed policy making skills among employees in four selected government ministries and two NGOs. The findings indicated no change in participants’ perceptions of their organization’s utilization of research; however, modest positive changes (increases) were observed on the three individual-level scales, although only one effect (the “assess” scale) was significant. Follow-up descriptive analyses failed to reveal any ‘dose’ effect: there was no consistent evidence of greater positive change among participants who engaged more strongly with the TROPIC program, although changes were marginally stronger for those categorized as moderately or highly engaged than for those with low engagement. There was also no effect on individuals’ global self-rating of understanding of research. Analyses of perceptions at the (aggregated) organizational level and the individual level indicated that effects (i.e. changes in perceptions of research utilization) were no different from those expected by chance.
Overall, the quantitative findings reported herein generally failed to show consistent effects for the TROPIC knowledge exchange research program. The only significant effect was for the individual “assess” scale, where scores improved, indicating that participants perceived their ability to critically evaluate research methodology, assess the relevance of research and synthesize different pieces of research as better after the TROPIC intervention. These results are consistent with previously reported findings from the interview data [12]. While the results for the two other individual scales (i.e. “looking for research” and “adapt”) showed positive changes, these effects were smaller in magnitude and non-significant. The interview findings provide some explanation for the modest effects: participants indicated that accessing research was limited by resources and infrastructure (access to databases, computers and the internet), as well as by time constraints and, in some cases, a perceived lack of organizational support [12]. The failure of the TROPIC intervention to show significant effects at the organizational level is not surprising, since it is unlikely that adjusting the capabilities and skills of a small number of personnel within any one organization would be sufficient to alter perceptions of research utilization capability across the organization. Even when individual data were aggregated by organization, no positive effects were discerned. Thus, while there was some indication of positive change in research utilization at the individual level, there was no evidence of perceived change at the organizational level, suggesting that the reach of the TROPIC intervention within the participating organizations was limited. It is well recognized that shifts in organizational culture require multi-faceted approaches over time [9, 18].
The factors that might facilitate or impede the reach of similar interventions need to be clarified.
The application of the TROPIC knowledge exchange research program through a knowledge-broking approach is important given the explanatory role of cultural and behavioral factors in decision making. While Waqa, Mavoa et al. [12] have previously reported that many participants believed they had increased skills in acquiring, accessing, adapting and applying evidence following the TROPIC project, and that their reporting had become based more on credible research evidence than on perceptions and anecdotal “evidence”, the quantitative evaluation failed to reveal consistent intervention effects at the individual level and showed no effect at the organizational level. A number of factors may account for this apparent inconsistency. First, while Waqa, Mavoa et al. [12] captured qualitative impressions of the TROPIC intervention, the present study reports only quantitative outcomes. Given the myriad challenges faced when operationalizing the intervention, discerning possible intervention effects through quantitative assessment is problematic and unlikely to capture the more personalized perceptions of the benefits of the intervention. Second, a number of participants commented in their post-intervention interviews that they had overrated their knowledge of research when they entered the TROPIC program and completed the initial IRWFY survey; only once they had gained a better understanding of research and its application did they feel able to provide a realistic score. Third, high staff turnover in all six organizations compromised continuity of staff engagement in the program. Fourth, many participants had roles involving multiple tasks, leaving fragmented and limited time to allocate to TROPIC activities. Natural disasters (i.e. a cyclone and two major floods) and the concomitant public health emergencies further fragmented the time available to engage in TROPIC, as the necessary governmental and NGO responses diverted staff from attending workshops and completing policy briefs. Fifth, only about one third of the organizations received tangible high-level political support from the relevant ministers (indicated by the presence of permanent secretaries (deputy ministers) and directors during presentation of policy briefs to the organization). The importance of high-level organizational support was reflected in the higher number of policy briefs produced by these organizations.
A number of other organizational barriers also limited the impact of using evidence in the development of policy briefs. Whilst these barriers varied across the six participating organizations, they included the reallocation of participants to other activities despite their reported interest in the intervention and the broader project; a lack of organizational support and incentives to persist with policy development work in the face of other organizational priorities; and a lack of information technology resources (e.g. database software) for storing evidence extracted from the scientific literature. Limitations of this nature have been reported elsewhere [19] and remain a challenge to embedding evidence-informed policy making within organizations. These challenges are heightened in low- to middle-income countries, which have limited economic and human resources and less capacity either to access or adapt evidence for policy documents [20], or to foster a culture that supports and extends evidence-informed policy making [4, 21]. Such limitations make it difficult to foster and sustain the culture, structures and processes that support evidence-informed policy making within organizations.
More generally, the relatively short duration of the TROPIC program (12–18 months per organization) may have been insufficient for the program activities to be integrated into individual participants’ working practices, and more especially into the practices of the organizations within which the participants operated. Almost half of the participants spent less than 9 months in the TROPIC program. It has been suggested that 15–18 months is the shortest duration that can be expected to produce change in evidence-informed decision making [8, 22]; however, the competing priorities of participating organizations, the timing of entry into the project vis-à-vis organizational policy cycles, and the limited resources of the TROPIC team meant that it was not possible to provide each organization with the optimal duration of intervention. Future knowledge-broking programs may require a longer duration as well as greater staff engagement: a higher proportion of staff within each organization needs to be engaged, and staff need support to continue participating in all of the intervention activities in the face of competing priorities. These and other factors, including longer training periods, more individualized tailoring of intervention strategies, and evaluation designs that allow sufficient time for effects to permeate organizational culture before post-intervention assessment, need to be considered before measurable shifts in the utilization of research can be discerned.
Another factor that may have contributed to the limited quantitative effects is our choice of survey instrument. We used the IRWFY instrument, one of the few available in 2009. The instrument has good usability, strong response variability and adequate discriminant validity [16]; however, it was designed not to assess intervention effects but for organizational self-assessment, to ‘scan’ and generate discussion about how research is used. It is therefore possible that the IRWFY tool was not sufficiently sensitive to detect relatively small changes in individual or organizational knowledge and practices. This, in conjunction with the relatively small final sample at post-intervention, may further explain the limited and inconsistent findings. Nevertheless, although the power to find a “statistically significant” intervention effect may have been low, the observed effects were small in any case, pointing to the more program-specific factors noted above.
While multiple competing priorities in most participating organizations limited the impact of using evidence in policy briefs, the knowledge-broking team offered a flexible schedule of activities, organized workshops away from participants’ workplaces and in blocks, and provided a sequence of policy brief writing retreats timed to be accessible for participants in each organization. In the interviews, participants indicated that barriers to evidence-informed policy making were not only individual (lack of knowledge about data sources) but also organizational: they cited inadequate time to develop evidence-informed briefs and insufficient resources for accessing and managing evidence [12]. Embedding evidence-informed policy making within organizational structures requires a critical mass of people with the skills to acquire, assess and adapt evidence to inform policy; the availability of timely, relevant evidence in language that resonates with policy makers; an organizational culture with clear structures and processes that support, recognize and reward evidence-informed decision making; and strong researcher-end-user relationships [21]. Organizations that participated in TROPIC are now well placed to build on: 1) excellent relationships with researchers, and 2) the growing number of personnel who have acquired evidence-informed decision making skills. The next challenge is to continue developing a culture that builds a solid organizational infrastructure to support evidence-informed decision making across all policies with potential health benefits. Post-TROPIC initiatives by at least one of the participating organizations to seek a similar experience for more personnel suggest that there is motivation to continue building a critical mass of staff with evidence-informed policy making skills.
The TROPIC study has provided some insights into knowledge-broking approaches to support evidence-informed policy development that are generic and transferable to any policy area. The value placed on different types of evidence within decision making contexts depends on individuals, the organizations in which they work and the systems in which they operate, and decision making processes are also context-dependent [23]. A supportive organizational environment is especially important for the transferability of skills in low- or middle-income countries with limited policy making resources, an observation consistent with other studies [4, 20, 21].
This study was unique in a number of respects. The knowledge-broking team employed several complementary approaches to facilitate evidence-informed policy making, including workshops tailored to the needs of individual organizations and individual knowledge-broking sessions in which participants received personalized guidance on accessing and utilizing research evidence to inform the development of policy briefs. The team also provided broader mentoring support for individual participants, assisted in aligning policy brief completion with policy timelines such as the approval of annual plans, protected participants’ time and sharpened their understanding of how to plan and draft policy briefs. In addition, the team developed a policy brief template that guided writing, as well as a template for presenting briefs to higher decision making levels [13]. A number of limitations of the study are acknowledged. Participants’ limited understanding of what constitutes evidence led many to overestimate their evidence-informed policy making skills on entering the intervention [13]. This overestimation, together with the disparity in participants’ basic skills and availability, required more individual mentoring than the team had anticipated. The project timelines and the capacity limits of the TROPIC team meant that it was not possible to individually tailor each of the knowledge-broking strategies. The duration of the intervention (≤ 9 months for almost half of the participants) may have been insufficient to demonstrate significant changes at the individual level, and certainly at the organizational level. The project timelines also meant that there was only a brief period between the conclusion of the intervention activities and the post-intervention assessment. Dedicated embedding of the knowledge and skills gained within organizational structures was beyond the scope of this 3-year project.
The attrition of participants from several organizations meant that a substantial proportion of participants engaged in limited intervention activities and, furthermore, were unavailable for follow-up surveying. The IRWFY instrument used to assess quantitative changes in research utilization may have lacked the sensitivity to discern an intervention effect. Finally, the TROPIC program evaluation did not explore the barriers to evidence awareness, knowledge and uptake.