Table 3 Challenges arising from differences between RCT and implementation research paradigms

From: CONSORT to community: translation of an RCT to a large-scale community intervention and learnings from evaluation of the upscaled program

 


Ethics

Experience: Ethics committees appeared to approach the Project from an RCT paradigm.

Early requests from ethics committees included the addition of a control group and the de-identification of data before they were shared with the team.

There was an apparent misunderstanding that ethics approval was required for the delivery of the program, as would be the case for an RCT, rather than for the collection of evaluation data, which is more appropriate for community program participants.

Response: Effort was made to develop relationships with ethics committees to enhance understanding of the Program and its implementation research approach.

Key learnings: At ethics review, there is a need to distinguish between research-based practice and practice-based research, such as program evaluation research.

Evaluation design

Experience: Engagement challenges experienced during implementation required changes to inclusion criteria, changes that would be avoided in an RCT.

In later stages of the program, inclusion criteria were expanded to include healthy weight children, in addition to children who were overweight or obese, in an effort to reduce the stigma of participating in a program for overweight/obese children.

Response: Questionnaires were updated to reflect the new criteria, and changes in anthropometry needed to be reported separately for healthy weight children and for the target population of children above a healthy weight. Data cleaning processes, data analysis syntax and feedback letters to families were tailored as needed (a hypothetical sketch of such stratified reporting follows this row).

Key learnings: Make concessions for, and anticipate changes in, evaluation that become necessary when there are responsive changes in the delivery of upscaled programs.
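The paper does not publish its analysis syntax; purely as an illustrative sketch of what "reporting anthropometry separately by weight status" can involve, the following pandas snippet summarises BMI change by baseline weight category. All column names ('weight_status', 'bmi_baseline', 'bmi_post') are assumptions.

```python
import pandas as pd

def summarise_anthropometry(df: pd.DataFrame) -> pd.DataFrame:
    """Summarise BMI change separately by baseline weight status.

    Assumes 'weight_status' holds categories such as 'healthy weight'
    and 'above healthy weight', with numeric BMI columns.
    """
    df = df.copy()
    df["bmi_change"] = df["bmi_post"] - df["bmi_baseline"]
    # One summary row per weight category, so healthy weight children
    # are never pooled with the target population.
    return (
        df.groupby("weight_status")["bmi_change"]
        .agg(["count", "mean", "std"])
        .round(2)
    )
```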

Experience: Evaluation length and the consent process may have been unanticipated and burdensome for participants, who signed up for a community program, not an RCT.

Participants enrolled in a community healthy lifestyle program and may not have considered themselves enrolled in a research project (cf. RCT participants). Correspondingly, the lengthy participant information sheet and associated consent form required by ethics may have impacted participant engagement with the evaluation and/or the program.

In addition to the consent procedures required by ethics, evaluation questionnaires were lengthy.

Response: An ethics modification was made so that data collected with implicit consent prior to and at sessions could be used without a signed consent form.

The instrument for measuring physical activity was changed to a much shorter tool, and onerous process evaluation items were omitted from questionnaires when further program changes were out of scope.

Key learnings: The collection of some data for program monitoring without explicit participant consent (analogous to health service performance monitoring) should be considered reasonable, and opt-out consent may be suitable for upscaled programs.

Use a 'minimalist or bare essentials' lens when designing evaluation.

Data collection

Experience: Research conducted in the real world has higher levels of incomplete, unusable, and missing data than research in a more tightly controlled RCT setting.

Child facilitators conducted the anthropometric measures following training, using standardised equipment and protocols. These facilitators had various backgrounds (e.g. health professionals, teachers, team sport coaches); some had limited experience in research and in taking child measurements, and may not have appreciated the implications for data collection. Consequently, there were some inaccuracies.

Despite training and support on evaluation, parent facilitators may have held different perspectives on their role in the Program, particularly if they were operating in a service delivery paradigm rather than one of practice-based research. Hence, assisting with or ensuring data collection at sessions was not always seen as a priority (establishing rapport with parents was), and engagement with, and completion of, the evaluation varied.

Response: A height test to ensure correct assembly of the stadiometer reduced the error rate, and protocols for handling unreliable anthropometry data were established as part of quality assurance (one possible plausibility check is sketched after this row).

All parent and child facilitators were trained, including on the importance of data collection and evaluation processes, and the Evaluation Team monitored the return of evaluation data and sent reminders to facilitators to collect outstanding data at sessions or to return outstanding questionnaires post-program.

Key learnings: Where anthropometry is a key outcome, consider using experienced or accredited personnel (e.g. accredited by the International Society for the Advancement of Kinanthropometry) to take measures.

Consider central and/or pre-program collection of evaluation data to reduce the burden on facilitators to collect data at early program sessions (especially for large groups).
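The protocols for handling unreliable anthropometry are not detailed in the table; the following is a minimal sketch, under assumed thresholds and column names, of the kind of range and consistency checks such a protocol might apply before analysis.

```python
import pandas as pd

# Illustrative plausibility limits for school-aged children
# (values are assumptions, not from the source).
HEIGHT_RANGE_CM = (90.0, 180.0)
WEIGHT_RANGE_KG = (15.0, 120.0)
HEIGHT_TOLERANCE_CM = 2.0  # allowed apparent shrinkage pre -> post

def flag_unreliable_anthropometry(df: pd.DataFrame) -> pd.DataFrame:
    """Add QA flags for out-of-range or internally inconsistent measures."""
    out = df.copy()
    out["height_out_of_range"] = ~out["height_cm_pre"].between(*HEIGHT_RANGE_CM)
    out["weight_out_of_range"] = ~out["weight_kg_pre"].between(*WEIGHT_RANGE_KG)
    # Children should not measure markedly shorter post-program than at
    # baseline; large drops suggest stadiometer assembly or reading errors.
    out["height_decreased"] = (
        out["height_cm_post"] < out["height_cm_pre"] - HEIGHT_TOLERANCE_CM
    )
    out["needs_review"] = out[
        ["height_out_of_range", "weight_out_of_range", "height_decreased"]
    ].any(axis=1)
    return out
```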

Experience: Program re-enrolments violate rigorous RCT protocols but are optimal in an upscaled program.

Families were able to enrol more than once, which had a cascade effect: multiple ID numbers were given to the same child when their family re-enrolled. Multiple ID numbers were also assigned when parents in a split family were enrolled in two separate groups but the same child was participating in the program.

Response: Where a child had multiple enrolments, and hence multiple study ID numbers, their records had to be manually screened and duplicates excluded in data analysis so that each child was counted once (a deduplication sketch follows this row).

Key learnings: Flexibility to re-enrol in upscaled programs held in the community is desirable; however, it can lead to duplication of work. Resources and time should be allocated to deal with data from these cases, to manually exclude duplicates or to reconcile sources of data when incomplete data are collected.
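As an illustration only (the paper does not describe its data-cleaning syntax), collapsing multiple study IDs down to one record per child might look like the pandas sketch below. It assumes a manually curated lookup table mapping duplicate study IDs to a canonical child identifier, i.e. the output of the screening step described above.

```python
import pandas as pd

def deduplicate_children(records: pd.DataFrame, id_map: pd.DataFrame) -> pd.DataFrame:
    """Collapse multiple study IDs per child to a single record.

    records : one row per study ID (must contain a 'study_id' column)
    id_map  : manual screening output with 'study_id' and 'child_id'
    """
    merged = records.merge(id_map, on="study_id", how="left")
    # Study IDs never flagged as duplicates map to themselves.
    merged["child_id"] = merged["child_id"].fillna(merged["study_id"])
    # Keep the most complete enrolment per child (fewest missing fields),
    # so the partially collected duplicates are the ones excluded.
    merged["n_missing"] = merged.isna().sum(axis=1)
    return (
        merged.sort_values("n_missing")
        .drop_duplicates(subset="child_id", keep="first")
        .drop(columns="n_missing")
    )
```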

Process evaluation

Experience: Process evaluation data are nice to have in an RCT but crucial for the successful implementation of upscaled programs. Process data are challenging to identify and capture in real time.

The Project Implementation Team desired 'real-time' feedback from programs to inform decision making during program implementation and was driven by meeting contracted enrolment targets.

Rich program monitoring data were collected from a variety of sources during the project. These data were outside the formal evaluation framework and were not formally captured in databases in situ or analysed for reporting.

Response: The Evaluation Team was able to provide only limited process and outcome data in real time outside the formal and contracted reporting schedule, as data were collected only at program end and data cleaning/analysis processes were time-intensive.

It took considerable time to reconcile data from multiple sources and prepare these data for analysis at the end of the project term.

Key learnings: The identification, systematic capture, and analysis of process evaluation data from a range of sources may be better managed by the project delivery team, who are in tune with program challenges and best equipped to respond to real-time feedback by making changes to delivery (one illustrative approach to in-situ capture is sketched below).

Plan a priori for the necessary expertise and budget to collect, manage, analyse and interpret process evaluation data in real time.
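The table does not prescribe tooling for in-situ capture; purely as a hypothetical sketch, session-level process data could be appended to a small relational table as programs run, making simple real-time summaries cheap to produce. All table and field names here are illustrative, not from the source.

```python
import sqlite3

# Hypothetical minimal store for in-situ process monitoring data.
conn = sqlite3.connect("process_monitoring.db")
conn.execute(
    """
    CREATE TABLE IF NOT EXISTS session_log (
        program_site            TEXT NOT NULL,
        session_date            TEXT NOT NULL,
        families_enrolled       INTEGER,
        families_attended       INTEGER,
        questionnaires_returned INTEGER
    )
    """
)
# Facilitators (or the delivery team) append one row per session.
conn.execute(
    "INSERT INTO session_log VALUES (?, ?, ?, ?, ?)",
    ("Site A", "2015-03-02", 12, 9, 7),
)
conn.commit()

# Real-time feedback: attendance rate per site to date.
for site, rate in conn.execute(
    """
    SELECT program_site,
           ROUND(1.0 * SUM(families_attended) / SUM(families_enrolled), 2)
    FROM session_log
    GROUP BY program_site
    """
):
    print(site, rate)
```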