Abstract

Background: An intervention’s success depends on how participants interact with it in local settings. Process evaluation examines these interactions, indicating why an intervention was or was not effective and how it (and similar interventions) can be improved for better contextual fit. This is particularly important for innovative trials like Supporting Policy In health with Research: an Intervention Trial (SPIRIT), where causal mechanisms are poorly understood. SPIRIT tested a multi-component intervention designed to increase the capacity of health policymakers to use research.

Methods: Our mixed-methods process evaluation sought to explain variation in observed process effects across the six agencies that participated in SPIRIT. Data collection included observations of intervention workshops (n = 59), purposively sampled interviews (n = 76) and participant feedback forms (n = 553). Using a realist approach, two authors coded the data for context-mechanism-process effect configurations (retroductive analysis).

Results: Intervention workshops were very well received. There was greater variation in views regarding other aspects of SPIRIT, such as data collection, communication and the intervention’s overall value. We identified nine inter-related mechanisms that were crucial for engaging participants in these policy settings: (1) Accepting the premise (agreeing with the study’s assumptions); (2) Self-determination (participative choice); (3) The Value Proposition (seeing potential gain); (4) ‘Getting good stuff’ (identifying useful ideas, resources or connections); (5) Self-efficacy (believing ‘we can do this!’); (6) Respect (feeling that SPIRIT understands and values one’s work); (7) Confidence (believing in the study’s integrity and validity); (8) Persuasive leadership (authentic and compelling advocacy from leaders); and (9) Strategic insider facilitation (local translation and mediation). These findings were used to develop tentative explanatory propositions and to revise the programme theory.

Conclusion: This paper describes how SPIRIT functioned in six policy agencies, including why strategies that worked well in one site were less effective in others. Findings indicate a complex interaction between participants’ perceptions of the intervention, shifting contextual factors, and the form that the intervention took in each site. Our propositions provide transferable lessons about contextualised areas of strength and weakness that may be useful in the development and implementation of similar studies.

Keywords

participant perspectives, research utilisation, process evaluation, realist evaluation, health policy

Link to Publisher Version (URL)

https://dx.doi.org/10.1186/s12961-017-0234-4
