
Integrating research and system-wide practice in public health: lessons learnt from Better Start Bradford

Abstract

Many interventions that are delivered within public health services have little evidence of effect. Evaluating interventions that are being delivered as a part of usual practice offers opportunities to improve the evidence base of public health. However, such evaluation is challenging and requires the integration of research into system-wide practice. The Born in Bradford’s Better Start experimental birth cohort offers an opportunity to efficiently evaluate multiple complex community interventions to improve the health, wellbeing and development of children aged 0–3 years. Based on the learning from this programme, this paper offers a pragmatic and practical guide to researchers, public health commissioners and service providers to enable them to integrate research into their everyday practice, thus enabling relevant and robust evaluations within a complex and changing system.

Using the principles of co-production, the key challenges of integrating research and practice were identified, and appropriate strategies to overcome these were developed across five key stages: 1) Community and stakeholder engagement; 2) Intervention design; 3) Optimising routinely collected data; 4) Monitoring implementation; and 5) Evaluation. As a result of our learning, we have developed comprehensive toolkits (https://borninbradford.nhs.uk/what-we-do/pregnancy-early-years/toolkit/) including: an operational guide through the service design process; an implementation and monitoring guide; and an evaluation framework. The evaluation framework incorporates implementation evaluations to enable understanding of intervention performance in practice, and quasi-experimental approaches to infer causal effects in a timely manner. We also offer strategies to harness routinely collected data to enhance the efficiency and affordability of evaluations that are directly relevant to policy and practice.

These strategies and tools will help researchers, commissioners and service providers to work together to evaluate interventions delivered in real-life settings. More importantly, however, we hope that they will support the development of a connected system that empowers practitioners and commissioners to embed innovation and improvement into their own practice, thus enabling them to learn, evaluate and improve their own services.


Background

The development of complex public health interventions often takes a top-down approach where researchers design and evaluate interventions without the involvement of those delivering or receiving the interventions. However, service providers’ knowledge of the complex context that they are working in, and participants’ responsiveness, are key elements to an intervention’s effectiveness in real world settings [1]. In contrast, commissioners and service providers often seek out and develop interventions that are potentially relevant to their complex systems, local context and local community needs without consideration of the evidence base. The consequence of such silo working is that many interventions that are currently being delivered within public health services have promise in a real world setting, but little robust evidence of effect [1]. For example, whilst there is robust evidence of the benefits of intervening early in childhood to prevent lifelong physical and psychological morbidity, there are only a small number of evidence based interventions available for delivery [2,3,4].

Evaluating interventions that are being delivered as part of usual practice offers valuable opportunities to contribute to the evidence base in public health research [1, 5, 6]. Effective interventions are those that are able to recruit and engage participants, be delivered with fidelity in real-life settings, and have a positive impact on one or more key outcomes. Quasi-experimental approaches can be employed to infer causal effects of interventions in a timely manner [6, 7], and can be augmented with implementation evaluations, which are crucial for understanding how an intervention performs in practice and in different contexts [8]. The use of routinely collected data for these evaluations offers an efficient method that is both pragmatic and affordable: it reduces the resources required for data collection and the burden on participants, and allows questions to be answered using outcomes directly relevant to policy and practice [4, 6, 9]. However, such methods are challenging to apply in practice and require the integration of research into system-wide practice.

Numerous initiatives have attempted to integrate research into system-wide practice (e.g. [10, 11]), and the challenges of, and solutions to, such implementation and integration are well described (e.g. [10,11,12,13,14]). These include: employing a variety of engagement methods for the local community and key stakeholders; identifying and aligning the differing priorities and needs of researchers, service providers and commissioners; enhancing the quality and accessibility of routine data; and finding ways to conduct pragmatic evaluations to strengthen the evidence base.

There are also a number of valuable, well-developed frameworks that provide detailed guidance to researchers undertaking the development and evaluation of complex public health interventions [15,16,17]. However, the necessary complexity of these guides, and the requirement for academic input, reduce the likelihood of their adoption within usual public health practice, where the choice to simplify, or even ignore, the challenges such guides highlight may appear more manageable [1, 18].

This paper adds to the existing literature by offering strategies, and associated tools, developed to integrate research and practice through the implementation and evaluation of multiple early years interventions delivered by the Better Start Bradford programme and evaluated by the Born in Bradford research programme [19].

These strategies are aimed at supporting public health commissioners and service providers, as well as researchers working in this field, to successfully integrate research and practice within a complex and changing public health system. Our learning aims to support the translation of rigorous academic evaluation methods into the standard development, monitoring and evaluation cycles of community-based public health interventions. In doing so it aims to provide a much needed applied solution to enhance the evidence base of public health interventions that are already being delivered in usual practice.

Methods

Setting

In 2015 the Big Lottery Fund launched “A Better Start” across five sites in the UK. The 10-year programme aims to give children the best start in life by offering interventions to pregnant women and children aged 0–3 years. One of the selected sites was Bradford, a socio-economically deprived and ethnically diverse city in the North of England. Within the city the programme is being delivered by Better Start Bradford, a community-led partnership involving key organisations delivering children’s services in the area, including the National Health Service (NHS), public health, the local authority, and voluntary and community sector (VCS) organisations [4]. The Better Start Bradford programme is implementing more than 20 interventions that are delivered by a range of statutory and VCS organisations (Table 1). The lack of a strong evidence base for early life interventions means that the majority of these interventions have been defined in a recent review [2] as ‘science based’ (i.e. developed using the best available evidence, but not tested or proven effective using robust methods of evaluation), rather than ‘evidence based’ (i.e. tested and proven effective using robust study designs).

Table 1 The Better Start Bradford interventions

A unique feature of Better Start Bradford is its partnership with Born in Bradford (BiB), a birth cohort study following 12,500 families. BiB began in 2007 and its remit is to better understand and to improve the health and wellbeing of children in the city and beyond [20, 21]. Together, Better Start Bradford and BiB established the Better Start Bradford Innovation Hub, a centre for monitoring and evaluation of multiple complex interventions within Better Start Bradford. To facilitate this, the Better Start Bradford Innovation Hub has established an innovative experimental birth cohort: Born in Bradford’s Better Start (BiBBS) [4]. BiBBS seeks consent from families living in the Better Start Bradford areas to follow them through linkage to their routine health, education and social care data and to monitor their participation in Better Start Bradford interventions [4]. The Innovation Hub offers an opportunity to efficiently evaluate multiple complex early life interventions through planned controlled experiments and quasi-experimental methods using routinely collected data from partners in health, social care, education, and the interventions themselves, to provide information on baseline characteristics, exposures and outcomes. It also aims to conduct implementation evaluations using best practice, as defined by the Medical Research Council [9], including complementary qualitative work.

Strategy and tool development

Strategies and tools were developed using the principles of co-production (e.g. [22]), defined in this case as working in partnership, or ‘with’, key stakeholders, including commissioners (Better Start Bradford), implementers (e.g. statutory and VCS organisations), and service users (community representatives). First, a series of workshops was held with representatives from all groups to identify the key challenges of integrating research and practice, and to develop appropriate strategies and tools to overcome these. A summary of these challenges and agreed strategies can be seen in Table 2. Once developed, the first iteration of the strategies and tools was shared at a full-day workshop including community representatives, commissioners and service providers from a wide range of health, local authority and VCS organisations in Bradford, as well as academics. Feedback from the workshop was used to refine the strategies and tools. The end result of this process was a series of practical, pragmatic strategies and tools usable by researchers, service providers and commissioners to overcome the challenges of integrating research into public health practice. These strategies are described in detail here and the corresponding tools and templates are available from the BiB website [23].

Table 2 A summary of the challenges, their causes and strategies to resolve them

Findings

Strategies were developed across five key areas of challenge: 1) Community and stakeholder engagement; 2) Intervention design; 3) Optimising routinely collected data; 4) Monitoring implementation; and 5) Evaluation. Whilst these stages are described sequentially, for successful integration of research into practice the process needs to be ongoing and cyclical (see Fig. 1).

Fig. 1 The Better Start Bradford Innovation Hub process of integrating research into practice

Community and stakeholder engagement

Successful community and stakeholder engagement requires the involvement of the community and stakeholders from the beginning, and at all stages of intervention design, delivery, evaluation and dissemination. Establishing a Community Advisory Group (CAG) made up of local people (e.g. local parents/patients, volunteers and local business leaders) will facilitate this integrated involvement. The CAG can be involved at every stage of intervention design and evaluation development, including setting evaluation objectives and outputs, developing the wording of surveys, developing information sheets and consent forms, and advising on appropriate methods for engaging with and recruiting local parents. The group can also play a key role in the interpretation and dissemination of findings before they are made public. Alongside a CAG, consultation events and focus groups can be conducted as and when specific guidance from the community is required to shape the work. Having a presence in the local community, by attending events, contributing to local newsletters, newspapers and radio programmes, and engaging through social media, is also a good facilitator.

The support and commitment needed to integrate research into practice go beyond the obvious research and practice teams, to include senior management, commissioners and data teams within different organisations. The starting point is a careful mapping of all key stakeholders, followed by regular and effective communication, preferably face-to-face. This contact enables all stakeholders to begin to learn how to work together, develop a common language, and gain a shared understanding of the pressures and priorities of all sides, so that shared objectives can be agreed. Working closely with commissioners and stakeholders enables a mutual understanding of the requirements for robust evaluation by researchers, the range of factors that impact on commissioning and de-commissioning decisions (including the timelines required for different decisions), and the practical challenges facing service providers. We have also developed an evaluation framework (see “Evaluation” section) in which evaluations are staged to allow short-term evaluations around implementation and trends that can fit into commissioning timelines, ahead of long-term effectiveness findings.

Intervention design

Within Better Start Bradford, each intervention undergoes a service design process involving the commissioner, a provider with expertise in delivering the service, the local community, and researchers. Service design describes the process by which all aspects of an intended intervention or service are specified, from referral and recruitment through to data capture, monitoring and evaluation. It helps to ensure that all parties involved in the intervention or service are clear as to the rationale behind it, how it will be delivered, what resources are required, what the intended outcomes are, and how the intervention or service should be monitored or evaluated.

We have developed a pragmatic operational guide that provides a framework to take an intervention through the service design process in a number of sequential phases [23]. This allows the expectations of all parties to be clear from the outset and ensures that all requirements are considered in a logical order, including: specification of the complex components of the intervention; consideration of the practical challenges and service constraints; the needs of the local community; recruitment and referral pathways; identification of measurable and appropriate outcomes through a logic model; and clarification of the data needed to measure these outcomes. Our guide contains a series of templates to help with these processes, including: a questionnaire to clarify the components of the intervention; a referral and recruitment pathway; a logic model template; and a minimum dataset to ensure appropriate and meaningful data capture (see Fig. 2 for an example).

Fig. 2 An example of the service design toolkit

Optimising the use of routinely collected data

Data quality

To ensure high-quality data capture, it is vital to work closely with all key stakeholders (e.g. senior managers, practitioners and data specialists), through face-to-face meetings, training sessions and workshops, to highlight the potential importance of the data for evaluation purposes as well as for informing clinical practice. It is also important to work with commissioners to prioritise the collection of key outcome measures in service level specifications, thereby ensuring that their completion is prioritised by practitioners. The development of training manuals and protocols for practitioners on how to administer and record key outcomes further enhances the possibility of capturing high-quality data. Alongside this, it is also important to work closely with data specialists to ensure that databases are modified to enable capture and reporting of outcomes in a useful and quantifiable way. Such work requires goodwill from providers and partners and/or additional finance to pay for development time. An example of this strategy in practice can be seen in Table 3.

Table 3 Challenges of routine data: An example from maternal mental health data

Valid and meaningful outcome measures

A co-production strategy with key partners can allow subjective measures to be replaced with valid and reliable outcome measures that are appropriate and meaningful to practitioners, participants and researchers. An example of this approach is presented in Table 4 and the steps of the co-production strategy can be seen in Table 5. An additional advantage of this process is that outcome measures can be aligned across organisations and interventions enabling comparison. It is important to consider the impact on data systems as any changes will require input from database specialists and/or software developers.

Table 4 Implementing validated objective outcomes into routine practice
Table 5 Co-production of validated and acceptable outcome measures

Data sharing and linkage

Safe and efficient data sharing between multiple agencies is a key component of any evaluation that relies on routinely collected data. However, ensuring information governance compliance across organisations, particularly in the context of new regulations (e.g. the EU General Data Protection Regulation), is challenging and complex. Completing public consultations about data sharing, and developing data sharing agreements between primary and secondary care organisations, health visitors, schools, the local authority and VCS organisations, offers an opportunity to open up conversations. Rather than researchers simply taking data from organisations, such conversations endeavour to use the data to inform the practice and priority planning of those organisations that share their information. This, in turn, will further support collaborative working (see Table 6).

Table 6 Using routine data to inform practice and policy

It is also important to spend time with all organisations to ensure that consent processes are acceptable to them and are sufficient to allow their Information Governance and legal teams to authorise the sharing of data. At the same time it is important to ensure that the consent processes are transparent and meaningful to the participating community, ensuring that all participants are fully informed and engaged in the work. To do this, documentation should be developed in collaboration with an established CAG (see “Community and stakeholder engagement” section). Their views and preferences can then be used to encourage pragmatism across organisations. An example of the privacy statement and consent form developed within this programme of work can be found on the Better Start Bradford website [24].

By working closely with intervention practitioners, and by producing a data specification (available as part of our toolkit [23]), the collection of key identifiers can be standardised across interventions and organisations. Encouraging partner organisations to consider adding a shared unique identifier (e.g. health or education numbers) to internal records will simplify matching on a wider scale. Within the Better Start Bradford programme, we are in the process of piloting a shared data system across different organisations to further improve the quality and consistency of data, and facilitate information sharing.
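As an illustration, the sketch below shows how standardised identifiers might support deterministic linkage between an intervention dataset and a routine health dataset. It is a minimal example in Python using pandas; the column names (e.g. nhs_number) and the fallback keys are hypothetical assumptions for illustration, not the programme’s actual data specification.

```python
# Minimal sketch of deterministic record linkage across two partner datasets,
# assuming both hold a shared unique identifier (here a hypothetical "nhs_number")
# plus date of birth and postcode as fallback keys. All values are illustrative.
import pandas as pd

intervention_records = pd.DataFrame({
    "nhs_number": ["111", "222", None],
    "dob": ["2017-03-01", "2018-06-12", "2018-09-30"],
    "postcode": ["BD5 0AA", "BD3 9XX", "BD5 0AB"],
    "sessions_attended": [4, 6, 2],
})
health_records = pd.DataFrame({
    "nhs_number": ["111", "222", "333"],
    "dob": ["2017-03-01", "2018-06-12", "2018-09-30"],
    "postcode": ["BD5 0AA", "BD3 9XX", "BD5 0AB"],
    "outcome_score": [21, 17, 25],
})

# 1) Exact match on the shared identifier where it is present
exact = intervention_records.dropna(subset=["nhs_number"]).merge(
    health_records, on="nhs_number", suffixes=("", "_health"))

# 2) Fall back to date of birth + postcode for records missing the identifier
fallback = intervention_records[intervention_records["nhs_number"].isna()].merge(
    health_records, on=["dob", "postcode"], suffixes=("", "_health"))

linked = pd.concat([exact, fallback], ignore_index=True)
print(linked[["dob", "sessions_attended", "outcome_score"]])
```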

All of the above work to enhance data quality, access and linkage is time-consuming and resource-intensive in the short term, but it results in high-quality, sustainable data resources that are more efficient than completing additional data collection solely for research purposes.

Monitoring implementation

Ongoing monitoring of performance during the delivery of an intervention is important to ensure that the intervention is being implemented as intended, so that an evaluation of its effectiveness can ultimately be conducted when the intervention and systems are fully developed and operational. Identifying three key performance indicators (which we term progression criteria), and agreeing boundaries that allow performance on these indicators to be rated using a “traffic light” system of Red, Amber and Green, will facilitate this process. It is important that selection of the criteria and boundaries is a shared process involving service providers, commissioners and evaluators. Agreed key performance criteria provide a simple way to obtain a regular overview of performance, and also allow for early identification of areas of success that can be shared with other interventions, as well as areas of potential concern that can then provide the basis for discussions on support or adaptations as required. By developing these criteria in collaboration with local services, there is acceptance across partners when issues are identified, and this allows a solution-focussed discussion to occur. An in-depth description of this process will be published shortly; an example of the benefits of using progression criteria can be seen in Table 7, and a simple sketch of the rating logic follows the table.

Table 7 An example of the benefits of using progression criteria
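To make the traffic-light idea concrete, the following is a minimal sketch of how progression criteria could be rated automatically. The criterion names, targets and the 75%/90% boundaries are illustrative assumptions only, not the thresholds agreed within Better Start Bradford.

```python
# Minimal sketch of rating agreed progression criteria on a Red/Amber/Green scale.
# Criteria, targets and boundaries are hypothetical examples, not programme values.

def rag_rating(actual, target, amber_threshold=0.75, green_threshold=0.9):
    """Rate performance against a target: Green >= 90%, Amber >= 75%, else Red."""
    ratio = actual / target
    if ratio >= green_threshold:
        return "Green"
    if ratio >= amber_threshold:
        return "Amber"
    return "Red"

# Example quarterly monitoring data for one intervention (simulated figures)
progression_criteria = {
    "recruitment":        {"actual": 42, "target": 50},  # families recruited
    "session_attendance": {"actual": 30, "target": 45},  # sessions attended
    "outcome_completion": {"actual": 47, "target": 50},  # outcome measures recorded
}

for criterion, values in progression_criteria.items():
    print(criterion, rag_rating(values["actual"], values["target"]))
```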

Evaluation

Many guidelines about gathering evidence start at the point at which an intervention is ready for evaluation [8, 15], but considerable work is often required to bring interventions to this point.

We have proposed strategies in our operational guide and toolkit [23] that are relevant to this process. In addition, we have worked closely with our partners and stakeholders to produce a monitoring and evaluation framework that highlights the steps needed to build up the evidence base for an intervention [23]. This framework takes a staged approach to evaluation based upon the logic model for the intervention (activity, input, output, short- and long-term outcomes), allowing each stage to be clearly defined as part of the process. An example of this approach is described in Table 8. The framework sets expectations for all partners by outlining what each step will and will not be able to tell us, and what is required to be able to complete that level of evaluation. To support this framework, we have adapted an evaluability checklist to aid decisions and set expectations about when an intervention is ready for an effectiveness evaluation. The evaluation framework provides a flexible approach to evaluation, meaning that decisions can be made based on the quality of current evidence, logistical constraints such as the time period for which an intervention is commissioned, ethical constraints and implementation performance.

Table 8 An example of a staged approach to evaluation

Implementation evaluations

Implementation evaluations are important for all public health interventions. This work helps to show how interventions work in practice and what adaptations may be needed to improve them. For interventions that are not yet ready for an effectiveness evaluation, this work should focus on exploratory activities such as defining the logic model and identifying outcomes. For interventions that are ready for an effectiveness evaluation, the implementation evaluation provides insight that helps to interpret outcome findings. Further details regarding the conceptual framework and justification of its use are provided in our evaluation framework [23]. Using data that are collected routinely by service providers as the basis for this approach makes these evaluations efficient, feasible and manageable for providers. Where necessary and feasible, qualitative methods can be used to supplement these data.

Before and after evaluations

For interventions that have an agreed logic model and that use validated outcome measures at the start and end of the intervention, before and after evaluations to estimate the change in outcome(s) are appropriate. These will most likely be short-term outcomes. Although this does not provide scientific evidence that participation in the intervention causes a change in the outcome, it provides an indication as to whether the intervention may work and thus provides some justification for completing a future effectiveness evaluation. Again, by using routinely collected outcome measures, these evaluations can be efficient and the findings directly applicable to practice.
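For illustration, a minimal sketch of such a before-and-after analysis is given below, assuming a validated outcome measure recorded at the start and end of an intervention. The data are simulated and the measure is hypothetical; the analysis estimates the mean change with a confidence interval, and does not, by itself, establish a causal effect.

```python
# Minimal sketch of a before-and-after analysis of a routinely collected outcome.
# Scores are simulated; lower scores indicate improvement on this hypothetical measure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
pre = rng.normal(loc=14, scale=4, size=80)          # baseline scores
post = pre - rng.normal(loc=2, scale=3, size=80)    # end-of-intervention scores

change = post - pre
t_stat, p_value = stats.ttest_rel(post, pre)        # paired t-test
ci = stats.t.interval(0.95, df=len(change) - 1,
                      loc=change.mean(), scale=stats.sem(change))

print(f"Mean change: {change.mean():.2f} "
      f"(95% CI {ci[0]:.2f} to {ci[1]:.2f}), p={p_value:.3f}")
```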

Effectiveness evaluations

For interventions that are ready for an effectiveness evaluation (see Table 8), innovative methods should be embraced, such as cohort multiple randomised controlled trials (also known as Trials within Cohorts) [25] and pragmatic quasi-experimental methods, which allow robust, timely and efficient evaluations whilst accommodating the challenges of evaluation in real-world settings [26, 27]. Further details of our plans for effectiveness evaluations can be found in the BiBBS protocol paper [4].
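As a simplified illustration of the Trials within Cohorts idea cited above, the sketch below randomly selects consented cohort members to be offered an intervention and compares routinely collected follow-up outcomes on an intention-to-treat basis. All data are simulated and the variable names are hypothetical; a real evaluation would require appropriate sample size, statistical modelling and adjustment, as set out in the BiBBS protocol [4].

```python
# Minimal sketch of a Trials-within-Cohorts style comparison using simulated data:
# eligible, consented cohort members are randomly selected to be offered the
# intervention; outcomes are later compared with the rest of the cohort (usual care).
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
cohort = pd.DataFrame({
    "child_id": range(200),
    "baseline_score": rng.normal(50, 10, 200),
})

# Randomly select half of the eligible cohort to receive the intervention offer
offered_ids = rng.choice(cohort["child_id"].to_numpy(), size=100, replace=False)
cohort["offered"] = cohort["child_id"].isin(offered_ids)

# Simulated follow-up outcome from routine data (small benefit among those offered)
cohort["outcome"] = (cohort["baseline_score"]
                     + rng.normal(2, 5, 200) * cohort["offered"]
                     + rng.normal(0, 5, 200))

# Intention-to-treat comparison of mean outcomes by offer group
print(cohort.groupby("offered")["outcome"].mean())
```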

Conclusion

Improving the evidence base of public health interventions can be achieved efficiently through the integration of research into system-wide practice. To be most effective, evaluations should be done in partnership with all stakeholders, including commissioners, service deliverers and communities that the interventions are intended for. Through this paper, we have shared a range of practical strategies that we have developed to allow the integration of pragmatic research into system-wide practice. We have also provided a number of tools and templates to assist this process. Throughout this paper we have offered case-studies demonstrating our strategies working in practice. Our next step is to obtain an independent evaluation of these strategies and tools.

These strategies and tools will help researchers, commissioners and service providers to work together to evaluate interventions delivered in real-life settings. More importantly, however, we hope that they will support the development of a connected system that empowers practitioners and commissioners to embed innovation and improvement into their own practice, thus enabling them to learn, evaluate and improve their own services. In order to do so, our key recommendations for researchers, commissioners and service providers are:

  • Members of the local community and service providers should be involved at each stage of intervention development and evaluation.

  • Researchers, the local community and stakeholders need to work together and understand each other’s worlds.

  • Use and adapt the toolkits presented here [23] to aid intervention design and ensure the needs of commissioners, providers and evaluators are all considered.

  • Conduct effective and focussed monitoring using progression criteria agreed by commissioners and providers. This will allow early identification of success and/or areas of potential concern that can then result in adaptations to enhance performance.

  • Researchers should harness the use of routine outcome measures in research, and service providers should recognise the value and requirements of their data for evaluation as well as for clinical practice.

  • Implement validated outcome measures through a co-production method to ensure they are valid, feasible and useful in practice within the intended population.

  • Use the evaluation framework presented here [23] to set expectations, ensure that the necessary groundwork is completed and answer important implementation questions before embarking on ambitious effectiveness evaluations.

Abbreviations

BiB:

Born in Bradford

BiBBS:

Born in Bradford’s Better Start

CAG:

Community Advisory Group

MPAS:

Maternal Postnatal Attachment Scale

NHS:

National Health Service

VCS:

Voluntary and Community Sector

References

  1. Hawe P. Lessons from complex interventions to improve health. Annu Rev Public Health. 2015;36:307–23.

  2. Axford N, Barlow J. What works: an overview of the best available evidence on giving children a better start. Dartington: The Social Research Unit at Dartington; 2013.

  3. Hurt L, Paranjothy S, Lucas PJ, et al. Interventions that enhance health services for parents and infants to improve child development and social and emotional well-being in high-income countries: a systematic review. BMJ Open. 2018;8:e014899. https://doi.org/10.1136/bmjopen-2016-014899.

  4. Dickerson J, Bird P, McEachan R, et al. Born in Bradford’s Better Start: an experimental birth cohort study to evaluate the impact of early life interventions. BMC Public Health. 2016;16:711. https://doi.org/10.1186/s12889-016-3318-0.

  5. Wanless D. Securing good health for the whole population. Final report. London: The Stationery Office; 2004.

  6. Craig P, Cooper C, Gunnell D, et al. Using natural experiments to evaluate population health interventions: new MRC guidance. J Epidemiol Community Health. 2012;66:12.

  7. Petticrew M, Cummins S, Ferrell C, et al. Natural experiments: an underused tool for public health? Public Health. 2005;119:751–7. https://doi.org/10.1016/j.puhe.2004.11.008.

  8. Moore GF, Audrey S, Barker M, et al. Process evaluation of complex interventions: Medical Research Council guidance. BMJ. 2015;350:h1258. https://doi.org/10.1136/bmj.h1258.

  9. Raghupathi W, Raghupathi V. Big data analytics in healthcare: promise and potential. Health Inf Sci Syst. 2014;2:3. https://doi.org/10.1186/2047-2501-2-3.

  10. Crow I, France A, Hacking S, Hart M. An evaluation of a long-term pilot scheme of community-based prevention services for young people. Joseph Rowntree Foundation; 2004. https://www.jrf.org.uk/report/evaluation-three-communities-care-demonstration-projects. Accessed 17 Sept 2018.

  11. Lantz PM, Viruell-Fuentes E, Israel B, et al. Can communities and academia work together on public health research? Evaluation results from a community-based participatory research partnership in Detroit. J Urban Health. 2001;78:495–507.

  12. Martin G, Ward V, Hendy J, et al. The challenges of evaluating large-scale, multi-partner programmes: the case of NIHR CLAHRCs. Evid Policy. 2011;7:489–509.

  13. Hinchcliff R, Greenfield D, Braithwaite J. Is it worth engaging in multi-stakeholder health services research collaborations? Reflections on key benefits, challenges and enabling mechanisms. Int J Qual Health Care. 2014;26:124–8.

  14. Kemp L, Chavez R, Harris-Roxas B, Burton N. What’s in the box? Issues in evaluating interventions to develop strong and open communities. Community Dev J. 2008;43:459–69.

  15. Craig P, Dieppe P, Macintyre S, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. Int J Nurs Stud. 2013;50:587–92.

  16. Gerhardus A, on behalf of the INTEGRATE-HTA project team. Integrated health technology assessment for evaluating complex technologies (INTEGRATE-HTA): an introduction to the guidances. 2016. http://www.integrate-hta.eu/downloads/. Accessed 17 Sept 2018.

  17. EUnetHTA. HTA Core Model. https://www.eunethta.eu/hta-core-model/. Accessed 17 Sept 2018.

  18. Gerhardus A. How to avoid giving the right answers to the wrong questions: the need for integrated assessments of complex health technologies. Int J Technol Assess Health Care. 2017;33:541–3.

  19. Dickerson J, Bird P, Bryant M, et al. Integrating research and system-wide practice in public health to enhance the evidence-base of interventions: lessons learnt from Better Start Bradford. Lancet. 2018;(Suppl 2):S30. https://doi.org/10.1016/S0140-6736(18)32874-5.

  20. Wright J, Small N, Raynor P, et al. Cohort profile: the Born in Bradford multi-ethnic family cohort study. Int J Epidemiol. 2013;42:978–91.

  21. Raynor P, Born in Bradford Collaborative. Born in Bradford, a cohort study of babies born in Bradford, and their parents: protocol for the recruitment phase. BMC Public Health. 2008;8:327. https://doi.org/10.1186/1471-2458-8-327.

  22. King AC, Winter SJ, Sheats JL, et al. Leveraging citizen science and information technology for population physical activity promotion. Transl J Am Coll Sports Med. 2016;1:30–44.

  23. Better Start Bradford Innovation Hub Toolkits. Born in Bradford. https://borninbradford.nhs.uk/what-we-do/pregnancy-early-years/toolkit/. Accessed 17 Sept 2018.

  24. Privacy notices. Better Start Bradford. https://betterstartbradford.org.uk/families-get-involved/our-projects/privacy/. Accessed 17 Sept 2018.

  25. Relton C, Torgerson D, O’Cathain A, Nicholl J. Rethinking pragmatic randomised controlled trials: introducing the “cohort multiple randomised controlled trial” design. BMJ. 2010;340:c1066.

  26. West SG, Duan N, Pequegnat W, et al. Alternatives to the randomized controlled trial. Am J Public Health. 2008;98:1359–66. https://doi.org/10.2105/AJPH.2007.124446.

  27. Craig P, Katikireddi SV, Leyland A, et al. Natural experiments: an overview of methods, approaches, and contributions to public health intervention research. Annu Rev Public Health. 2017;38:39–56. https://doi.org/10.1146/annurev-publhealth-031816-044327.


Acknowledgements

The integration of research and practice in Bradford has only been possible because of the enthusiasm and commitment of staff and volunteers across children’s services in Bradford. We are grateful to all Born in Bradford staff, the Better Start Bradford staff, all Better Start Bradford project teams, and children’s services staff in the NHS, local authority and VCS organisations who have supported the integration of research into practice. We are also grateful to all the families taking part in BiBBS and all members of the Community Research Advisory Group.

Funding

This study has received funding through a peer review process from the Big Lottery Fund as part of the A Better Start programme. The Big Lottery Fund have not had any involvement in the design or writing of the study protocol. Authors PB, TB, KP, RM and JW are funded by the NIHR CLAHRC Yorkshire and Humber (www.clahrc-yh.nihr.ac.uk). The views and opinions expressed are those of the author(s), and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care.

Availability of data and materials

Data sharing is not applicable to this article. However, please note that data and samples collected throughout the course of the BiBBS cohort will be available to external researchers and proposals for collaboration will be welcomed. Information on how to access the data can be found at: www.borninbradford.nhs.uk.

Author information

Authors and Affiliations

Authors

Consortia

Contributions

JD, PB, MB, ND, SB, KW, SA, AD, DN, EU, TB, CBC, PS, NS, MH, GT, KP, RM & JW contributed to the design of the study, were involved in drafting this manuscript, approving the final version of this manuscript, and agree to be accountable for this work.

Corresponding author

Correspondence to Josie Dickerson.

Ethics declarations

Ethics approval and consent to participate

The protocol for recruitment into BiBBS, and the collection and use of the BiBBS cohort baseline and routine outcome data and biological samples for the evaluation of Better Start Bradford interventions (including Trials within Cohorts) has been approved by Bradford Leeds NHS Research Ethics Committee (15/YH/0455). BiBBS takes written informed consent from all participants. The Health Research Authority have confirmed that monitoring and implementation evaluations of Better Start Bradford interventions (i.e. evaluations using intervention monitoring data and qualitative work) are service evaluation, not research, and as such do not require review by an NHS Research Ethics Committee (HRA decision 60/88/81).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Dickerson, J., Bird, P.K., Bryant, M. et al. Integrating research and system-wide practice in public health: lessons learnt from Better Start Bradford. BMC Public Health 19, 260 (2019). https://doi.org/10.1186/s12889-019-6554-2
