
COVID-19 myth-busting: an experimental study

Abstract

Background

COVID-19 misinformation is a danger to public health. A range of formats are used by health campaigns to correct beliefs but data on their effectiveness is limited. We aimed to identify A) whether three commonly used myth-busting formats are effective for correcting COVID-19 myths, immediately and after a delay, and B) which is the most effective.

Methods

We tested whether three common correction formats could reduce belief in COVID-19 myths: (i) question-answer, (ii) fact-only, (iii) fact-myth. Participants (n = 2215; n = 1291 after attrition and exclusions), representative of the UK for age and gender, were randomly assigned to one of the three formats. Eleven myths were sourced from fact-checker websites and piloted to ensure believability. Participants rated myth belief at baseline, were shown correction images (the intervention), and then rated myth beliefs immediately post-intervention and after a delay of at least 6 days. A partial replication (n = 2084, UK representative) was also completed, with immediate myth ratings only. Analysis used mixed models with participants and myths as random effects.

Results

Myth agreement ratings were significantly lower than baseline for all correction formats, both immediately and after the delay; all β’s > 0.30, p’s < .001. Thus, all formats were effective at lowering beliefs in COVID-19 misinformation.

Correction formats only differed where baseline myth agreement was high, with question-answer and fact-myth more effective than fact-only immediately; β = 0.040, p = .022 (replication set: β = 0.053, p = .0075) and β = − 0.051, p = .0059 (replication set: β = − 0.061, p < .001), respectively. After the delay, however, question-answer was more effective than fact-myth, β = 0.040, p = .031.

Conclusion

Our results imply that COVID-19 myths can be effectively corrected using materials and formats typical of health campaigns. Campaign designers can use our results to choose between correction formats. Where myth belief was high, the question-answer format was more effective than the fact-only format immediately post-intervention and more effective than the fact-myth format after the delay.


Background

The COVID-19 pandemic has spawned an abundance of misinformation [1,2,3,4], described by the World Health Organization (WHO) as an ‘infodemic’ [5]. In many countries, misinformation preceded the outbreak of COVID-19 infections and posed a serious threat to public health [6]. False statements such as “prolonged use of face masks cause health problems” [7], “over 90% of positive COVID-19 tests are false” [7] and “the new COVID-19 vaccine will alter your DNA” [7] reduce compliance with health advice [8] and oblige health teams to compete with science denialism groups. For this reason, the WHO identifies COVID-related misinformation 24 h a day [9] and provides ‘myth-busting’, as do many WHO member countries, such as the UK [10] and Brazil [11], and prominent online platforms (e.g., The Guardian [12], BBC [13], and fact checker websites [7, 14, 15]).

But correcting misinformation is difficult. Misinformation can sway reasoning long after attempts have been made to correct it [16,17,18,19,20,21,22,23]. Health campaigns must therefore optimise their materials to maximise belief change. This requires successfully linking the correction with the misinformation in the mind of the reader [24]. Traditionally, myth-busting campaigns have done this explicitly by naming the myth as well as providing a rebuttal (“Myth: the COVID-19 vaccine is mandatory. Fact: the COVID-19 vaccine is not mandatory…”). This approach is used extensively in public health (e.g., influenza [25], smoking [26]) and has been applied to COVID-19 myths [7, 13, 14, 27,28,29,30].

However, there have been fears that repeating the myth makes the misinformation more familiar and therefore more likely to be considered true [18]. This phenomenon could lower campaign effectiveness [18], and more recent campaigns, such as those by the WHO, have used approaches that either avoid repeating the myth entirely (fact-only, “The new COVID-19 vaccine will not alter your DNA”) or implicitly link the myth with the correction using a question-answer format (question-answer, “Does the new COVID-19 vaccine alter your DNA? No…”). In contrast to these approaches, recent studies question the need to omit the myth [31,32,33], although current guidance recommends placing the myth after, rather than before, the rebuttal [34]. Indeed, including myths can sometimes have positive effects on belief change [29, 35].

In this study we compared three approaches to myth-busting to establish whether health campaigns might be most effective when they include the myth, omit the myth, or use a question-answer format. We used a randomised trial with a representative sample.

Facts and myths vs only facts

A central question in myth-busting is whether to repeat myths in the myth-busting materials or to present only correcting facts. Early studies suggested that repeating myths had a detrimental effect [18]. It was argued that doing so risked making the myths more familiar [36], and that it promoted shallow processing of the material [37]. For example, Skurnik, Yoon & Schwarz [38] found that after a 30-min delay, participants in a flu myth-busting condition mislabelled myths as facts. They also found that intention to obtain the influenza vaccine was lowered following corrective information that included statements of the myth (a ‘backfire’ effect). Such backfire effects led to advice not to make explicit reference to myths, but to present only facts [18, 39].

But recent work presents a more muted conclusion. Familiarity effects have proven elusive [31,32,33] and difficult to replicate [40]. For example, Swire et al. [31] presented participants with a series of true and false claims (myths) that were subsequently affirmed or corrected. They measured the corresponding change in belief and found no evidence of backfire effects at short or long delays, or in older people (whose ability to recall information using strategic memory processes is typically less efficient than younger people’s).

Recent studies have also found advantages for restating the myth during correction, both immediately and after a delay [31, 33, 36, 41]. A limitation of these studies, however, is that they were not designed with Public Health interventions in mind. Instead, they focused on fake news headlines or stories, or general claims that were then fact-checked, e.g., “The national animal of Scotland is the unicorn” (true). For example, the study generally cited to support inclusion of the myth is Ecker, Hogan & Lewandowsky [42], who used a continued influence paradigm modelled on misinformation retraction in news media. Participants read novel news stories (e.g., about a wildfire) that included crucial information (how the fire started) that was later retracted. Retraction that explicitly stated the original information ("It was originally reported that the fire had been deliberately lit, but authorities have now ruled out this possibility. After a full investigation and review of witness reports, authorities have concluded that the fire was set off by lightning strikes") was more effective than retraction that did not ("After a full investigation and review of witness reports, it has been concluded that the fire was set off by lightning strikes"). The materials in health campaigns and social media generally contain much less information than the whole news stories used in continued influence paradigms, are aimed at familiar myths rather than novel news, correct myths after a much longer delay, and have a more diverse audience than Ecker et al.’s participants (n = 60 per condition, Psychology students).

In summary, the prevailing view is that including myths as well as facts is more effective at changing beliefs than including only facts. Nonetheless, the variability in findings, and the differences between health campaigns and experimental investigations, motivated our dedicated COVID-19 study, designed with the specific purpose of providing advice for health campaigns.

Question-answer

Explicitly including the myth in a correction provides a cue that there is a conflict between the facts and pre-existing beliefs. An alternative approach to myth-busting is to use a question format that implicitly cues the myth (“Will the new COVID-19 vaccine alter your DNA? No…”). This format prompts the reader to internally retrieve the answer to the myth question. Conflict is potentially generated between the retrieved and the provided answer and resolved by belief revision.

It is unknown whether implicitly cueing the myth, as in question-answer, produces greater correction of the myth than explicitly doing so. Greater correction might arise because interrogatives yield more engagement or intrinsic motivation than declarative statements [43]. For example, “Will I ….?” motivates more goal directed behaviour than “I will ….”, and rhetorical questions are more effective at encouraging elaborative processing of material than declaratives [37, 44]. On the other hand, implicitly cueing the myth risks the reader failing to access relevant representations. For example, the reader may not expend sufficient processing time to retrieve the correct answer to the question [45]. If this happens, there would be no coactivation of the myth and the correction, and so belief revision would not arise [24].

The question-answer format is currently deployed by the WHO, amongst others, to combat coronavirus misinformation [30]. One study has tested this approach, using a WHO infographic to correct the myth that garlic is a cure for coronavirus [46], with mixed results: there was no significant overall effect on misinformation belief, and there was a backfire effect for older adults (55+) (although the authors’ purpose was not to compare the question-answer format with any other approach; they were comparing age groups in the UK and Brazil).

Study rationale and outline

In sum, question-answer, fact-only and fact-myth formats are all currently deployed in an attempt to correct COVID-19 misinformation. There are reasons to favour each. Fact-myth presents an explicit link between pre-existing beliefs and corrective material and so may facilitate the detection of conflict, but risks making the myth familiar. Fact-only avoids making the myth more familiar, but risks failing to link pre-existing beliefs and corrective material. Question-answer invites an implicit link between myth and correction that may be more engaging and could yield better recall, but for the same reason it could boost myth familiarity.

To identify which format is most effective at changing beliefs in COVID-19 myths, we compared their effectiveness using a randomised trial with a representative UK sample. Participants read myths/facts and appropriate corrections and then answered inference questions testing their agreement with the myths.

There were three between-subject conditions: (i) Question-answer, (ii) Fact-only, (iii) Fact-myth (Fig. 1). The materials were designed to be usable and relevant to public health but also to follow the most recent advice [34]. When including the myth, traditional public health myth-busting typically presents the myth first and then a correction (myth-fact), but the current advice [34] is to place the fact first and then the myth (fact-myth). We followed this advice.

Fig. 1

Example correction graphics. There were three correction format conditions: A Question-answer B Fact-only C Fact-myth. Each graphic had two boxes. The first contained the intervention material, the second the supporting explanation statement (and the answer, i.e., yes/no, in the case of question-answer)

Participants were tested prior to correction (baseline) to establish baseline beliefs to act as a (repeated measures) control condition. Participants were then tested immediately after correction (timepoint 1) and after a delay of at least 6 days (timepoint 2). This enabled us to answer the following main research questions:

  1) Which formats are effective immediately and after a delay? That is, does each format lower agreement with myths (effective correction), increase agreement (a backfire effect), or neither?

  2) What is the most effective myth correction format?

In exploratory work, we also investigated how age interacted with our research questions.

Methods

Ethics information

The project was approved by Cardiff University School of Psychology’s Ethics Committee (EC.19.07.16.5653GR2A4). Participants consented at the beginning of the study and received payment and debrief after participation.

Preregistration

The study was pre-registered at (https://osf.io/huz4q/).

Design and materials

Myth selection

We ran two short surveys to select real-world COVID-19 myths as materials (Table SI.M.1). Together these surveys yielded 11 myths for the main study (see Table 1). The first survey tested a list of 39 myths sourced from the WHO’s COVID-19 myth-busters list [30] and fact checker websites [7, 15]. Myths were included if they had potential to influence readers’ behaviour. For example, the myth that “500 lions were released into the streets to prevent people from leaving their houses during lockdowns in Russia” [7] was not included as it was unlikely to affect behaviour in the UK. The myth that “gargling with salt water can prevent COVID-19” [7] was included as it had the potential to change behaviour. Final myths were selected following discussion and consensus of the team.

Table 1 Myths used in the study

Fifty participants recruited from the online participant panel Prolific [47] rated how much they agreed with each myth, alongside four COVID-19 facts, in a random order, using a pointer on a visual analogue scale from “Strongly disagree (0)” to “Strongly agree (100)”. We selected myths with above 20% average agreement to be included in this study. This process yielded five myths.

We repeated the study with a new set of 18 myths (again those with behavioural relevance) from the WHO [30] and fact checker websites [7, 15, 48, 49] and an additional 50 participants (Prolific). One participant was removed for giving the same response (50) to all questions. Again, we selected all myths with above 20% average agreement, except for one, because there was subsequent scientific debate about whether it was partially true (the effects of Vitamin D). This yielded six myths.
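
As a hedged illustration of the selection rule described above (average agreement above 20 on the 0–100 visual analogue scale), the filtering step could be expressed in R along the following lines. The data frame and column names (`pilot`, `myth`, `agreement`) are hypothetical placeholders, not the study's own objects.

```r
# Minimal sketch of the pilot selection rule, assuming a long-format data frame
# `pilot` with one row per participant x statement and columns `myth`
# (statement id) and `agreement` (0-100 visual analogue rating).
library(dplyr)

selected_myths <- pilot %>%
  group_by(myth) %>%
  summarise(mean_agreement = mean(agreement, na.rm = TRUE)) %>%
  filter(mean_agreement > 20) %>%   # retain myths with above 20% average agreement
  pull(myth)
```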

Correction graphics

Graphics were designed to conform to current myth-busting advice [34]. Each graphic (Fig. 1) therefore contained source information, including an NHS and COVID-19 logo, and a supporting explanation statement that gave an alternative to the myth (Table SI.M.1). We also included a non-probative image (an image that is related to the claim but does not give extra information about the claim’s veracity), since such images are often included in Public Health Campaigns [50]. The same image was used in each format because engagement can be increased even by non-informative images [51, 52].

Agreement questions

Participants rated their agreement with myths in response to questions that differed in style to the correction graphics to avoid pattern matching between the two (Table SI.M.1). Agreement ratings were made on a six-point Likert scale ranging from “Strongly agree” to “Strongly disagree”. We also included 4 fact statements, to encourage participants to use the full scale (Table SI.M.2).

Catch questions

We used two catch questions to eliminate participants who did not read the questions. Berinsky, Margolis and Sances [53] recommend the use of multiple items to measure attention. The questions we included were “There are seven days in the week” and “The first letter of the alphabet is ‘T’”. Participants answered “True” or “False”.

Demographics questions

Participants were asked about age, education, ethnicity, vaccine concern, vaccine intentions and COVID-19 experiences (Table SI.M.3).

Procedure

Baseline

Participants completed a short set of questions measuring demographic information and personal experiences with COVID-19. They then answered the 17 agreement questions (11 myths, 4 facts, 2 catch trials), in a random order. Participants used a six-point Likert scale.

Intervention

Immediately following the agreement questions, participants were randomly assigned to one of three correction formats (question-answer, fact-only or fact-myth). They then viewed the corresponding 11 correction graphics.

Timepoint 1

Immediately following the correction phase, participants again rated agreement with the 17 statements, in a random order.

Timepoint 2 (delay)

Participants completed timepoint 2 between 6 and 20 days later (M = 8.9 days), again rating their agreement with the statements in a new random order.

Participants

We recruited participants representative for age and gender across the UK, via Qualtrics, an online participant platform. To achieve a representative sample, we applied age and gender quotas. Age 18–24: 12%, 25–34: 19%, 35–44: 18%, 45–54: 20%, 55–64: 17%, 65+: 14%. Gender male: 49%, female: 51%. Power calculations are described in the pre-registration. The main dataset consisted of 2215 participants who completed baseline and timepoint 1, of whom 1329 completed timepoint 2 (an attrition rate of 36%). Of these, 38 were excluded for not meeting the minimum age requirement (18 years, n = 2) or for failing the catch trials (n = 36).

Therefore, the n for main analysis was 1291. Of these, 440 participants were randomly assigned to the question-answer condition, 435 to fact-only and 416 to fact-myth. 47% identified as “man”, 52% identified as “woman”. Age ranged from 18 to 89 years; 5% were 18–24 years, 16% were 25–34 years, 18% were 35–44 years, 24% were 45–54 years, 19% were 55–64 years, and 18% were aged above 65 years. 6% identified as Asian, 1.5% as Black, 89.6% as White and 2.9% as Mixed/multiple ethnic groups.

Replication data for timepoint 1

We also collected a partial dataset in which timepoint 2 was not collected (due to an error). These data were collected 3 weeks prior to the main dataset (January 2021; the main dataset was collected in February 2021), and we use them to test for replication of the main results for timepoint 1. A total of 2275 participants were recruited, and 191 were excluded for not meeting the inclusion criteria described above. Of the remainder, 691 participants were randomly assigned to the question-answer condition, 687 to fact-only and 704 to fact-myth. 48% identified as “man”, 51% identified as “woman”. Participants ranged in age from 18 to 91 years; 14% were 18–24 years, 21% 25–34 years, 19% 35–44 years, 19% 45–54 years, 15% 55–64 years, and 13% above 65 years. 7.7% identified as Asian, 2.2% as Black, 0.3% as Middle Eastern, 86% as White and 2.8% as mixed/multiple ethnic groups. 24% reported they were in a COVID-19 risk group, 6.6% had had a positive COVID-19 test, and 8.5% reported they were healthcare workers.

Analysis approach

Linear mixed effect (LME) models were used to analyse the data. Analysis was conducted in R using lme4 [54], lmerTest [55] and lmer_alt() (afex package [56]). Random effects for participants and myths were included in the models, allowing us to generalise across both. Effects are reported as treatment contrasts with reference level according to the reported comparison (e.g., reported effect of question-answer vs fact-myth assumes question-answer as the reference level). p-values were obtained via the Satterthwaite approximation.

We obtained model convergence by starting with a model that had the maximal random effects structure justified by the design (as per the advice of Barr, Levy, Scheepers, & Tily [57]) and, if that did not converge, removing correlations between intercepts and slopes for myths (see [58, 59]). Model 1 (see below) converged with the maximal random effects structure, but Models 2 and 3, which had many more parameters, required suppression of correlations between intercepts and slopes. This led to successful model convergence in all cases. Thus, all models included slopes and intercepts for all factors where the design allowed, but not necessarily the correlations between intercepts and slopes.

Even with convergence there remained singularity warnings. We therefore tried simplifying the models by removing further random effects structure. However, this led to models that either failed to converge or were over-simplified (i.e., ignored obvious structure in the data) and consequently risked being anti-conservative (e.g., [57]). Moreover, wherever we obtained a simplified model that both converged and was free of singularity warnings, the significant effects present in the more complex models were also present in the simpler models. We therefore report the results of the most complex models that converged, as described below.
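
The diagnostics implied by this strategy (convergence messages and singular fits) can be sketched as follows, assuming a fitted lme4/lmerTest model object `m` from any of the models described below; the object name and the exact checks shown are illustrative rather than the authors' own code.

```r
# Hedged sketch: inspect convergence and singularity of a fitted model `m`.
library(lme4)

m@optinfo$conv$lme4$messages  # empty/NULL if the optimiser reported no problems
isSingular(m)                 # TRUE indicates a singular random-effects fit
summary(rePCA(m))             # near-zero components suggest an over-specified structure
```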

Research question 1

To test whether each correction format lowered agreement scores at each timepoint, we used:

$$Model\;1:Myth\_agreement\sim\;timepoint\;+\;(1+timepoint\vert participant)\;+\;(1+timepoint\vert myth)$$

Where Myth_agreement is the outcome variable and timepoint is a fixed factor (baseline, timepoint 1, timepoint 2). Random effects (identified to the right of the pipe symbol, |) include intercepts (identified by the 1 to the left of the |) and slopes (identified by the named factors after 1 +), and correlations between the two. Model 1 was applied to each correction format separately (one model for question-answer, one for fact-only, etc.).
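
A minimal sketch of how Model 1 could be fitted per correction format with lmerTest is given below. The data frame `d` and its column names (Myth_agreement, timepoint, correction, participant, myth) are placeholders rather than the authors' actual variable names; p-values come from lmerTest's Satterthwaite approximation, as in the reported analyses.

```r
# Fit Model 1 separately for each correction format (question-answer, fact-only,
# fact-myth), with maximal random effects for participants and myths.
library(lmerTest)  # provides lmer() with Satterthwaite-approximated p-values

fit_model1 <- function(format) {
  lmerTest::lmer(
    Myth_agreement ~ timepoint +
      (1 + timepoint | participant) + (1 + timepoint | myth),
    data = subset(d, correction == format)
  )
}

models1 <- lapply(c("question-answer", "fact-only", "fact-myth"), fit_model1)
lapply(models1, function(m) summary(m)$coefficients)  # estimate, SE, df, t, p per contrast
```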

Research question 2

To compare the correction formats we used:

$$Model\;2:Myth\_agreement\sim correction\ast baseline\ast timepoint\;+\;(1+timepoint\vert participant)\;+\;(1+correction\ast baseline\ast timepoint\parallel myth)$$

Where correction is a fixed factor with three levels (question-answer, fact-only, fact-myth), baseline is a continuous covariate corresponding to baseline scores for each participant and myth, and timepoint is a fixed factor with two levels (timepoint 1 and timepoint 2). The * notation includes all main effects and interactions for the listed factors, so the model includes all main effects and interactions for both fixed and random effects. Correlations between intercepts and slopes were suppressed for the myth random effects (identified by the double pipe, ||, using lmer_alt(); see Analysis approach above).

Baseline was included as a covariate to resolve problems associated with variable degrees of belief in the myths. Myths that were not believed by participants (low baseline scores) could not be corrected (agreement scores lowered) by the intervention, and myths believed too much (high baseline scores) could not exhibit a backfire effect (agreement scores raised). Including baseline as a covariate meant that we could understand effects of the intervention at different levels of baseline belief.

To replicate the results for timepoint 1 with the secondary set of participants, we simply restricted Model 2 to timepoint 1 only:

$$Model\;2(a):Myth\_agreement\sim correction\ast baseline+(1\vert participant)+(1+correction\ast baseline\parallel myth)$$
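
A hedged sketch of Models 2 and 2(a) using afex::lmer_alt(), which expands the double-bar syntax so that intercept–slope correlations for myths are suppressed, is shown below. As in the Model 1 sketch, `d` (the main set, with each participant × myth baseline rating merged in as a numeric `baseline` column) and `replication` are placeholder objects, and the relevel() call illustrates how the reference level of the treatment contrasts would be set for a given reported comparison.

```r
# Hedged sketch of Model 2 (main set, timepoints 1 and 2) and Model 2(a)
# (replication set, timepoint 1 only).
library(afex)

# Treatment contrasts: choose the reference level to match the reported comparison,
# e.g. question-answer as reference when comparing question-answer vs fact-myth.
d$correction <- relevel(factor(d$correction), ref = "question-answer")

post <- subset(d, timepoint != "baseline")  # keep timepoint 1 and timepoint 2 responses

m2 <- lmer_alt(
  Myth_agreement ~ correction * baseline * timepoint +
    (1 + timepoint | participant) +
    (1 + correction * baseline * timepoint || myth),  # || suppresses intercept-slope correlations
  data = post
)

# Model 2(a): replication data, immediate (timepoint 1) ratings only
m2a <- lmer_alt(
  Myth_agreement ~ correction * baseline +
    (1 | participant) + (1 + correction * baseline || myth),
  data = replication
)
```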

Results

Research question 1: which formats are effective immediately and after a delay?

Timepoint 1 and timepoint 2 myth agreement ratings were significantly lower than baseline for all correction formats (Fig. 2, see SI. Analysis Table SI.A.1 for means and Table SI.A.2 for model parameters), all β’s > 0.30, SE’s < 0.092, df’s > 11, t’s > 5.95, p’s < .001 (replication set: all β’s < 0.43, SE’s < 0.076, df’s > 11, t’s < − 6.95, p’s < .001). That is to say, each format was effective and did not backfire.

Fig. 2

Means of myth agreement ratings (1 denotes low agreement, 6 denotes high agreement) with by-participant standard errors and violin distributions. Ratings were reduced at both timepoints 1 and 2 for all correction formats (question-answer, qa; fact-only, fo; fact-myth, fm) relative to baseline. At timepoint 2, myth agreement was higher than at timepoint 1, but stayed below baseline for all formats

Nonetheless, ratings partially returned towards baseline at timepoint 2, as shown by significant timepoint 1 to timepoint 2 differences, all β’s > 0.18, SE’s < 0.045, df’s > 13, t’s > 4.6, p’s < .001 (although still falling short of baseline).

Research question 2: which is the most effective correction format?

There was no overall difference between correction formats, but there were interactions with baseline agreement and timepoint (Figs. 3 and 4). The main pattern of interest was that differences between formats became evident where the myths were more strongly believed (i.e., where baseline myth agreement was high compared to when it was low; Figs. 3 and 4). These differences are considered in detail below.

Fig. 3

Main data set. Means of myth agreement (post-intervention) as a function of baseline agreement (pre-intervention), correction format and timepoint e.g., responses at timepoint 1 in the question-answer condition that were 2 at baseline (pre-intervention) had an average of 1.5 post-intervention. N’s indicate the number of responses in each data point e.g., there were 3505 responses that had baseline 2. No N’s are included for timepoint 2 because the same number of responses were used for timepoint 1 and timepoint 2. Dashed line shows equivalence between baseline and myth agreement (post-intervention) so that data below the line indicates correction. In both timepoints there was a strong positive correlation between baseline agreement and post-intervention agreement (post-intervention agreement was high when baseline agreement was high). Differences between correction formats were more apparent at higher levels of baseline agreement than at lower levels, hence interactions between baseline and correction format. At timepoint 1, no differences between correction formats were visible when baseline was low, but at higher levels fact-only was less effective at lowering agreement than question-answer or fact-myth (p = .022). At timepoint 2, again no differences were visible at low baselines, but fact-myth was less effective than question-answer when baseline was very high (p = .031)

Fig. 4

Replication data set. Means of myth agreement (post-intervention) as a function of baseline agreement (pre-intervention) and correction format. Data from replication set. N’s indicate the number of responses in each data point. Dashed line shows equivalence between baseline and myth agreement (post-intervention) so that data below the line indicates correction. Data pattern replicates main data set in that fact-only is less effective than other correction formats at higher baselines

First, for question-answer vs fact-only, there was a marginal interaction with baseline and time, β = 0.032, SE = 0.018, df = 1272, t = 1.79, p = .073, such that differences were greater at higher baselines and at timepoint 1. Simple effects confirmed (and replicated) that question-answer was more effective at reducing myth agreement than fact-only for higher baselines at time point 1, β = 0.040, SE = 0.018, df = 28, p = .022 (replication set: β = 0.053, SE = 0.018, df = 19, p = .0075), but not at timepoint 2, β = 0.0075, SE = 0.019, df = 26, t = 0.39, p = 0.70.

There was also a marginal effect of question-answer vs fact-myth by baseline and time, β = − 0.020, SE = 0.010, df = 5341, t = − 1.93, p = .053, with effects smaller at timepoint 1 than timepoint 2. This was also confirmed by simple effects: there was a significant effect at timepoint 2, β = 0.040, SE = 0.018, df = 28, p = .031, such that question-answer was more effective than fact-myth at higher baselines compared to lower baselines. There was no significant question-answer vs fact-myth by baseline interaction at timepoint 1, β < .001, SE = 0.018, df = 35, t = 0.040, p = 0.97 (replication set: β = − 0.0028, SE = 0.013, df = 24, p = .84).

Finally, there was a fact-only vs fact-myth by baseline and time interaction, β = − 0.038, SE = 0.010, df = 15,990, t = − 3.70, p < .001. This was reflected as a significant simple effect at timepoint 1 for the fact-only vs fact-myth by baseline interaction, β = − 0.051, SE = 0.017, df = 42, t = − 2.91, p = .0059 (replication set: β = − 0.061, SE = 0.016, df = 19, t = − 3.91, p < .001), such that fact-myth was more effective than fact-only at higher baselines than at lower baselines. At timepoint 2 the difference between fact-only and fact-myth was no longer significant, β = .025, SE = 0.015, df = 23,340, t = 1.63, p = 0.10.

Analysis of timepoint 1 with combined data set

The analyses above used data from the main set that included only those participants who completed both timepoints. This was necessary to allow comparison between timepoint 1 and timepoint 2. However, the consequence was a substantial loss of power when considering timepoint 1 alone (N = 2177 vs N = 1291). Furthermore, the main data set (Fig. 3) and the replication data set (Fig. 4) were analysed separately, whereas a combined analysis would have maximised power. We therefore combined the complete main data set, N = 2177, and the replication set, N = 2084, to yield the largest possible data set (Fig. 5).

Fig. 5

Means of myth agreement (post-intervention) as a function of baseline agreement (pre-intervention), correction format and timepoint. Data combined from complete main and replication data set. Dashed line shows equivalence between myth agreement (post-intervention) and baseline. There are interactions of correction format by baseline such that fact-only is less effective than other formats at higher baselines

The results replicated the individual analyses above. There were no main effects but there were interactions with baseline (Fig. 5). For question-answer vs fact-only, there was an interaction with baseline agreement, β = .039, SE = 0.012, df = 16, t = 3.32, p = 0.0044, such that question-answer was more effective at reducing myth agreement than fact-only for higher baselines. Similarly, for fact-myth vs fact-only, there was an interaction with baseline, β = .033, SE = 0.012, df = 16, t = 2.75, p = 0.015, such that fact-myth was more effective at reducing myth agreement than fact-only. There was no interaction of question-answer vs fact-myth by baseline, however, β = 0.0086, SE = 0.011, df = 16, t = 0.75, p = 0.46.

Exploratory questions

We considered the effects of age as a non-preregistered exploratory question (analysis shown in SI.Analysis). Following Vijaykumar et al. [46], we divided participants into an older group (> 55 years old) and a younger group (≤ 55 years old). Overall, older participants had lower baseline agreement with myths than younger participants, consistent with Vijaykumar et al. [46], although correction was effective for all formats and no backfire effects were observed. Analysing older and younger participants separately showed that while younger participants showed the same correction format effects as in the main analysis, older participants showed no differences between formats.

Discussion

This study demonstrates that simple, poster-like images of the style used in public health campaigns can reduce COVID-19 myth agreement both immediately post-intervention and after a delay. This efficacy applied across a UK sample representative for age and gender, across a range of myths, and was replicated in a partial study. Furthermore, it was present in both older and younger people [46].

All formats proved effective at reducing myth agreement. Nonetheless, there were differences between formats where baseline (pre-intervention) myth agreement was high. Immediately post-intervention, question-answer and fact-myth were more effective correction formats than fact-only, and after a delay, question-answer was more effective than fact-myth. We therefore recommend question-answer as the preferred format for myth-busting COVID-19, all else being equal.

No backfire effects

Misinformation researchers have sometimes observed “backfire” effects, whereby attempted correction leads to elevated belief in the myths [60,61,62]. While such effects have not been consistent in myth-busting research [31, 32, 35, 42], backfire was recently observed for older people when attempting to correct a COVID-19 myth about garlic [46] in a similar study to ours. We found that common COVID-19 correction formats did not cause backfire effects, even in older people.

Correction formats

We found no main effect differences between correction formats but there were interactions with baseline agreement. These were such that differences were visible when baseline agreement was high (i.e., only when people believed the myths pre-intervention).

Immediately post-intervention, fact-myth was more effective than fact-only. This is consistent with prior studies [35, 42, 63] demonstrating that reminding participants of misinformation facilitated correction. This could be because restating the myth allows improved coactivation of the myth and the correction [24]. Another possibility is that restating the myth makes the fact more familiar relative to fact-only. Informing people that a proposition is a myth communicates that the negation of the proposition is a fact. For example, the utterance, “Some people incorrectly believe that the COVID-19 vaccine will change your DNA,” is logically equivalent to saying that the COVID-19 vaccine will not change your DNA. Thus, the advantage of fact-myth might arise because the fact is communicated more often than in fact-only.

Question-answer was also more effective than fact-only immediately post-intervention. One potential explanation is that the question-answer image motivated readers to search for a relevant myth, much like an internally motivated myth restatement. However, effects after the delay provide some evidence that this account is incorrect. Here, question-answer was more effective than fact-myth; if question-answer participants benefitted from an internal myth restatement, we should not have observed differences between the external (fact-myth) and internal (question-answer) myth restatement conditions. Note, however, that the statistical differences between question-answer and fact-myth after the delay were weak (p = .03) and differences were only visible at very high baseline scores (Fig. 3). Further evidence is required to confirm the advantages of question-answer at longer intervals.

Another possibility is that the question-answer advantage arose from facilitated retrieval and/or encoding. Retrieval might have been facilitated in a similar way to the testing effect seen in educational settings [64]. In educational research, it is well known that self-testing (questioning oneself about the to-be-learnt material) produces better long-term recall than repeated reading of the to-be-learnt material, one explanation being that testing enhances learning by producing elaboration of existing memory traces and their cue-target relationships [65]. However, it is unclear whether immediately providing the answer, as in the question-answer format used here, is equivalent to providing the answer after a delay, as is typical in educational research.

The discourse structure was different in question-answer than fact-myth or fact-only. This could have contributed to encoding differences. First, question-answer was pragmatically more felicitous than fact-only. Fact-only lacked an obvious “question-under-discussion” [66], a reason why the fact was presented, and so participants were obliged to expend effort in search of one. Second, question-answer provided a clear statement about the veracity of the queried fact. The answer (“yes” or “no”) told participants whether the statement was true and might have acted as a memory “tag” [18, 67]. In other conditions, the veracity of the fact had to be inferred from the experimental context.

In summary, question-answer and fact-myth conferred advantages relative to fact-only immediately post-intervention. After the delay, there was some evidence that question-answer was more effective than fact-myth. These findings lead us to recommend question-answer as the preferred format for COVID-19 myth correction campaigns (in contrast to the format used by some current campaigns, e.g., WHO [30], which use fact-only formats). However, it is important to emphasise that the effects of correction format were small compared to the effects of correction more generally (compare differences across correction formats in Fig. 2 with differences between baseline and post-intervention), especially after the delay. It is thus better to include correction in any format than to avoid doing so for fear of causing harm with an ineffective myth-busting campaign (see [68] for a similar point).

Limitations and future studies

Our study comes with a number of caveats and opportunities for further research. The first relates to the myths we tested. The level of belief in our myths was low overall, around 40% at baseline, which meant that there was only limited room for correction (although much room for backfire effects). The consequence was that the power of the study was reduced relative to a study with more strongly believed myths (e.g., [31]), and this may have contributed to our failure to find differences between some conditions. Nonetheless, the loss in power was accompanied by a gain in validity. The materials we used were genuine COVID-19 myths, sourced from fact-checker websites, rather than the everyday narratives used in continued influence paradigms [42]. The results of this study are therefore more likely to generalise to COVID-19 myth-busting campaigns than if we had used non-COVID materials.

By limiting our materials to myths found in current COVID-19 health information, we not only limited the pre-existing myth belief, we also limited the range of myths. The myths we tested could all be considered rumours [69], in that they were factually verifiable and designed not to inflame political beliefs. Our conclusions are thus limited to these forms of misinformation. Other types of misinformation, such as conspiracy theories, tend to be much more difficult to correct [70] and may respond differently to the correction formats we tested.

The second limitation relates to the degree of engagement with the materials. Our findings are the result of participants reading each correction image when told to do so, independently of whether they found the topic or format engaging. In real health campaigns, people will only process material they are drawn to engage with, and the danger is that engagement and memory will dissociate. It thus remains possible that, for example, question-answer produces the most effective memory correction but fact-myth is more engaging.

Relatedly, we did not test the effects of partial engagement. Many readers outside of an experiment will only shallowly process posters or social media content, perhaps just reading the title [71] or initial sentences, or their attention might be divided between reading the correction and other tasks, impairing memory [72] and even the processing of corrections specifically [73]. Correction under these conditions may be weaker than effects reported here [33] and may differ according to correction format. For example, were people to read the question of a question-answer format poster without reading the answer (“Does the COVID-19 vaccine change your DNA?”), the myth may become more familiar than if the question had not been read at all. This would be more likely when the answer was separated from the question by large chunks of intervening material.

Finally, there were subtle differences between the formats we tested and the materials used in health campaigns. These differences may influence the extent to which our findings generalise. The first is that where we included myths, we did so after stating the fact, in line with current guidance. However, health campaigns, fact checkers, and previous studies have often presented the myth first and then the fact [7, 62]. From a partial engagement perspective, putting the fact first reduces the probability that the myth is read without the correcting fact, although a recent pre-print [68] found myth-fact to be more effective than fact-myth after a retention delay. Second, we did not use the word “myth”, as in the traditional myth-busting format (“Myth: The COVID-19 vaccine changes your DNA”). Instead, we used synonymous text strings (“Some people wrongly believe that…”) that fitted better with the structure of the materials and are widely employed in campaigns (e.g., [74]). It is possible that using the more concise, lexical form would be easier for people to process and so lead to greater correction.

Conclusion

Our results imply that COVID-19 myths can be effectively corrected using materials and formats typical of health campaigns [25, 26]. This applies across subgroups for whom backfire effects have previously been observed [46]. Health campaigns can also use our results to select the optimum correction formats. While myth-busting in any of the three formats we tested was effective, question-answer format and fact-myth were more effective than fact-only, and there was some evidence that question-answer was more effective than fact-myth in the longer term. Further research needs to widen the range of myths tested from the verifiable rumours considered here to conspiracy theories [69], and to consider how different formats behave under partial engagement conditions.

Availability of data and materials

The datasets generated and analysed during the current study are available in the Open Science Framework repository, https://osf.io/huz4q/.

Abbreviations

WHO:

World Health Organization

LME:

Linear mixed effect

References

  1. Brennen JS, Simon F, Howard PN, Nielsen RK. Types, sources, and claims of COVID-19 misinformation. Reuters Inst. 2020;7(3):13.

  2. Mian A, Khan S. Coronavirus: the spread of misinformation. BMC Med. 2020;18(1):1–2.

  3. Motta M, Stecula D, Farhart C. How right-leaning media coverage of Covid-19 facilitated the spread of misinformation in the early stages of the pandemic in the U.S. Can J Polit Sci. 2020;53(2):335–42.

  4. Kouzy R, Abi Jaoude J, Kraitem A, El Alam MB, Karam B, Adib E, et al. Coronavirus goes viral: quantifying the COVID-19 misinformation epidemic on Twitter. Cureus. 2020;12(3):4–11.

  5. WHO. Novel Coronavirus (2019-nCoV): situation report – 13. 2020. Available from: https://www.who.int/docs/default-source/coronaviruse/situation-reports/20200202-sitrep-13-ncov-v3.pdf?sfvrsn=195f4010_6. Cited 2020 Nov 16.

  6. Gallotti R, Valle F, Castaldo N, Sacco P, De Domenico M. Assessing the risks of ‘infodemics’ in response to COVID-19 epidemics. Nat Hum Behav. 2020;4(12):1285–93. https://doi.org/10.1038/s41562-020-00994-6.

  7. Full Fact. Full Fact fights bad information. fullfact.org. Available from: https://fullfact.org/. Cited 2020 Nov 16.

  8. Freeman D, Waite F, Rosebrock L, Petit A, Causier C, East A, et al. Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychol Med. 2020:1–13. https://doi.org/10.1017/S0033291720001890.

  9. WHO. EPI-WIN: WHO information network for epidemics. Available from: https://www.who.int/teams/risk-communication. Cited 2021 May 25.

  10. Cabinet Office, Department of Health and Social Care. Government launches Coronavirus Information Service on WhatsApp. GOV.UK. 2020. Available from: https://www.gov.uk/government/news/government-launches-coronavirus-information-service-on-whatsapp. Cited 2021 May 25.

  11. Ricard J, Medeiros J. Using misinformation as a political weapon: COVID-19 and Bolsonaro in Brazil. The Harvard Kennedy School (HKS) Misinformation Review. 2020;1(2):1–6.

  12. Devlin H. Can a face mask protect me from coronavirus? Covid-19 myths busted. theguardian.com. Available from: https://www.theguardian.com/world/2020/apr/11/can-a-face-mask-protect-me-from-coronavirus-covid-19-myths-busted. Cited 2020 Nov 16.

  13. bbc.co.uk. Coronavirus: More health myths to ignore. 2020. Available from: https://www.bbc.co.uk/news/av/52093412. Cited 2020 Nov 16.

  14. Snopes. snopes.com. Available from: https://snopes.com/. Cited 2020 Nov 16.

  15. AFP Fact Check. Available from: https://factcheck.afp.com/. Cited 2020 Nov 16.

  16. Chan MP, Jones CR, Hall Jamieson K, Albarracín D. Debunking: a meta-analysis of the psychological efficacy of messages countering misinformation. Psychol Sci. 2017;28(11):1531–46.

  17. Johnson HM, Seifert CM. Sources of the continued influence effect: when misinformation in memory affects later inferences. J Exp Psychol Learn Mem Cogn. 1994;20(6):1420–36.

  18. Lewandowsky S, Ecker UKH, Seifert CM, Schwarz N, Cook J. Misinformation and its correction: continued influence and successful debiasing. Psychol Sci Public Interest. 2012;13(3):106–31.

  19. Paynter J, Luskin-Saxby S, Keen D, Fordyce K, Frost G, Imms C, et al. Evaluation of a template for countering misinformation — real-world autism treatment myth debunking. PLoS One. 2019;14(1):1–13.

  20. Rich PR, Zaragoza MS. The continued influence of implied and explicitly stated misinformation in news reports. J Exp Psychol Learn Mem Cogn. 2016;42(1):62–74.

  21. Walter N, Tukachinsky R. A meta-analytic examination of the continued influence of misinformation in the face of correction: how powerful is it, why does it happen, and how to stop it? Commun Res. 2020;47(2):155–77.

  22. Wilkes AL, Leatherbarrow M. Editing episodic memory following the identification of error. Q J Exp Psychol Sect A. 1988;40(2):361–87.

  23. Hviid A, Vinsløv Hansen J, Frisch M, Melbye M. Measles, mumps, rubella vaccination and autism: a nationwide cohort study. Ann Intern Med. 2019;170(8):513–20.

  24. Kendeou P, Walsh EK, Smith ER, O’Brien EJ. Knowledge revision processes in refutation texts. Discourse Process. 2014;51(5-6):374–97. https://doi.org/10.1080/0163853X.2014.913961.

  25. WHO. 5 myths about the flu vaccine. Available from: https://www.who.int/influenza/spotlight/5-myths-about-the-flu-vaccine. Cited 2020 Nov 16.

  26. NHS. 10 myths about stop smoking treatments. 2018.

  27. King’s College London. The ten most dangerous coronavirus myths debunked. 2020. Available from: https://www.kcl.ac.uk/blog-the-ten-most-dangerous-coronavirus-myths-debunked-1. Cited 2020 Nov 16.

  28. Roper M. Top 17 coronavirus myths debunked - from face masks to hand sanitiser. mirror.co.uk; 2020. Available from: https://www.mirror.co.uk/news/uk-news/top-17-coronavirus-myths-debunked-21704101. Cited 2020 Nov 16.

  29. Reynolds M, Weiss S. Does alcohol kill coronavirus? The biggest myths, busted. wired.co.uk; 2020. Available from: https://www.wired.co.uk/article/alcohol-kills-coronavirus-myth-busting. Cited 2020 Nov 16.

  30. WHO. Coronavirus disease (COVID-19) advice for the public: Mythbusters. www.who.int. Available from: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/advice-for-public/myth-busters. Cited 2020 Nov 16.

  31. Swire B, Ecker UKH, Lewandowsky S. The role of familiarity in correcting inaccurate information. J Exp Psychol Learn Mem Cogn. 2017;43(12):1948–61.

  32. Swire-Thompson B, DeGutis J, Lazer D. Searching for the backfire effect: measurement and design considerations. J Appl Res Mem Cogn. 2020;9(3):286–99.

  33. Ecker UKH, Lewandowsky S, Chadwick M. Can corrections spread misinformation to new audiences? Testing for the elusive familiarity backfire effect. Cogn Res Princ Implic. 2020;5(1):1–25.

  34. Lewandowsky S, Cook J, Ecker UKH, Albarracín D, Amazeen MA, Kendeou P, et al. The Debunking Handbook 2020. 2020. Available from: https://sks.to/db2020. Cited 2020 Nov 16.

  35. Wahlheim CN, Alexander TR, Peske CD. Reminders of everyday misinformation statements can enhance memory for and beliefs in corrections of those statements in the short term. Psychol Sci. 2020;31(10):1325–39. https://doi.org/10.1177/0956797620952797.

  36. Skurnik I, Yoon C, Schwarz N. “Myths & Facts” about the flu: health education campaigns can reduce vaccination intentions. Unpublished manuscript; 2007. Available from: http://webuser.bus.umich.edu/yoonc/research/Papers/Skurnik_Yoon_Schwarz_2005_Myths_Facts_Flu_Health_Education_Campaigns_JAMA.pdf.

  37. Yeh MA, Jewell RD. The myth/fact message frame and persuasion in advertising: enhancing attitudes toward the mentally ill. J Advert. 2015;44(2):161–72.

  38. Skurnik I, Yoon C, Park DC, Schwarz N. How warnings about false claims become recommendations. J Consum Res. 2005;31(4):713–24. Available from: https://academic.oup.com/jcr/article-lookup/doi/10.1086/426605.

  39. Cook J, Lewandowsky S. The debunking handbook. St. Lucia: University of Queensland; 2011.

  40. Haglin K. The limitations of the backfire effect. Res Polit. 2017;4(3):1–5.

  41. Ecker UKH, O’Reilly Z, Reid JS, Chang EP. The effectiveness of short-format refutational fact-checks. Br J Psychol. 2019;111(1):36–54.

  42. Ecker UKH, Hogan JL, Lewandowsky S. Reminders and repetition of misinformation: helping or hindering its retraction? J Appl Res Mem Cogn. 2017;6(2):185–92.

  43. Senay I, Albarracín D, Noguchi K. Motivating goal-directed behavior through introspective self-talk: the role of the interrogative form of simple future tense. Psychol Sci. 2010;21(4):499–504.

  44. Petty RE, Cacioppo JT, Heesacker M. Effects of rhetorical questions on persuasion: a cognitive response analysis. J Pers Soc Psychol. 1981;40(3):432.

  45. Bott L, Rees A, Frisson S. The time course of familiar metonymy. J Exp Psychol Learn Mem Cogn. 2016;42(7):1160–70.

  46. Vijaykumar S, Jin Y, Rogerson D, Lu X, Sharma S, Maughan A, et al. How shades of truth and age affect responses to COVID-19 (Mis)information: randomized survey experiment among WhatsApp users in UK and Brazil. Humanit Soc Sci Commun. 2021;8(1):1–12.

  47. Prolific. Available from: www.prolific.co. Cited 2020 Nov 16.

  48. BBC Reality Check. Available from: https://www.bbc.co.uk/news/reality_check. Cited 2020 Nov 16.

  49. FactCheck.org. A Project of the Annenberg Public Policy Center. Available from: https://www.factcheck.org. Cited 2020 Nov 16.

  50. Public Health England. NHS Materials for Hospitals, GPs, Pharmacies and other NHS Settings. Available from: https://coronavirusresources.phe.gov.uk/nhs-resources-facilities/resources/. Cited 2021 Sep 30.

  51. Fenn E, Ramsay N, Kantner J, Pezdek K, Abed E. Nonprobative photos increase truth, like, and share judgments in a simulated social media environment. J Appl Res Mem Cogn. 2019;8(2):131–8. https://doi.org/10.1016/j.jarmac.2019.04.005.

  52. Newman EJ, Garry M, Bernstein DM, Kantner J, Lindsay DS. Nonprobative photographs (or words) inflate truthiness. Psychon Bull Rev. 2012;19(5):969–74.

  53. Berinsky AJ, Margolis MF, Sances MW. Separating the shirkers from the workers? Making sure respondents pay attention on self-administered surveys. Am J Pol Sci. 2014;58(3):739–53.

  54. Bates D, Mächler M, Bolker BM, Walker SC. Fitting linear mixed-effects models using lme4. J Stat Softw. 2015;67(1):1–48.

  55. Kuznetsova A, Brockhoff PB, Christensen RHB. lmerTest package: tests in linear mixed effects models. J Stat Softw. 2017;82(1):1–26.

  56. Singmann H, Bolker B, Westfall J, Aust F, Ben-Schachar MS, Højsgaard S, et al. Package ‘afex’; 2021. p. 1–76. Available from: http://afex.singmann.science/, https://github.com/singmann/afex.

  57. Barr DJ, Levy R, Scheepers C, Tily HJ. Random effects structure for confirmatory hypothesis testing: keep it maximal. J Mem Lang. 2013;68(3):255–78. https://doi.org/10.1016/j.jml.2012.11.001.

  58. Bates D, Kliegl R, Vasishth S, Baayen H. Parsimonious mixed models. arXiv preprint arXiv:1506.04967; 2015.

  59. Singmann H, Kellen D. An introduction to mixed models for experimental psychology. In: New methods in cognitive psychology. Routledge; 2019. p. 4–31.

  60. Gigerenzer G, Gaissmaier W, Kurz-Milcke E, Schwartz LM, Woloshin S. Helping doctors and patients make sense of health statistics. Psychol Sci Public Interest. 2007;8(2):53–96.

  61. Pluviano S, Watt C, Della Sala S. Misinformation lingers in memory: failure of three pro-vaccination strategies. PLoS One. 2017;12(7):1–15.

  62. Pluviano S, Watt C, Ragazzini G, Della Sala S. Parents’ beliefs in misinformation about vaccines are strengthened by pro-vaccine campaigns. Cogn Process. 2019;20(3):325–31.

  63. Winters M, Oppenheim B, Sengeh P, Jalloh MB, Webber N, Pratt SA, et al. Debunking highly prevalent health misinformation using audio dramas delivered by WhatsApp: evidence from a randomised controlled trial in Sierra Leone. BMJ Glob Health. 2021;6(11):e006954.

  64. Roediger HL, Karpicke JD. Test-enhanced learning: taking memory tests improves long-term retention. Psychol Sci. 2006;17(3):249–55.

  65. McDaniel MA, Masson MEJ. Altering memory representations through retrieval. J Exp Psychol Learn Mem Cogn. 1985;11(2):371–85.

  66. Roberts C. Information structure in discourse: towards an integrated formal theory of pragmatics. Ohio State Univ Work Pap Linguist. 1996;49:91–136.

  67. Mayo R, Schul Y, Burnstein E. “I am not guilty” vs “I am innocent”: successful negation may depend on the schema used for its encoding. J Exp Soc Psychol. 2004;40(4):433–49.

  68. Swire-Thompson B, Cook J, Butler L, Sanderson J, Lewandowsky S, Ecker UKH. Correction format has a limited role when debunking misinformation. Preprint; 2021. https://doi.org/10.31234/osf.io/gwxe4.

  69. Islam MS, Kamal AHM, Kabir A, Southern DL, Khan SH, Murshid Hasan SM, et al. COVID-19 vaccine rumors and conspiracy theories: the need for cognitive inoculation against misinformation to improve vaccine adherence. PLoS One. 2021;16(5):1–17. https://doi.org/10.1371/journal.pone.0251605.

  70. Lewandowsky S. Conspiracist cognition: chaos, convenience, and cause for concern. J Cult Res. 2021;25(1):12–35.

  71. Gabielkov M, Ramachandran A, Chaintreau A, Legout A. Social clicks: what and who gets read on Twitter? Perform Eval Rev. 2016;44(1):179–92.

  72. Craik FIM, Naveh-Benjamin M, Govoni R, Anderson ND. The effects of divided attention on encoding and retrieval processes in human memory. J Exp Psychol Gen. 1996;125(2):159–80.

  73. Ecker UKH, Lewandowsky S, Tang DTW. Explicit warnings reduce but do not eliminate the continued influence of misinformation. Mem Cogn. 2010;38(8):1087–100.

  74. Schraer R, Lawrie E. Coronavirus: scientists brand 5G claims ‘complete rubbish’. BBC News; 2021. Available from: https://www.bbc.co.uk/news/52168096. Accessed 23 Oct 2021.


Acknowledgements

Not applicable.

Funding

Economic and Social Research Council Impact Acceleration Account (ESRC IAA) award “Optimisation of COVID-19 myth-busting materials” awarded to LB and PS. A.C. was supported by a KESS 2 Studentship awarded to LB and PS. It is part funded by the Welsh Government’s European Social Fund (ESF) and part funded by Aneurin Bevan University Health Board. The funders had no role in the design of the study, data collection, analysis, interpretation of data or writing the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

A.C. designed the study, conducted the study, analysed the data, and wrote/edited the manuscript. P.S. designed the study and wrote/edited the manuscript. L.B. designed the study, conducted the study, analysed the data, and wrote/edited the manuscript. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Aimée Challenger.

Ethics declarations

Ethics approval and consent to participate

The project was approved by Cardiff University School of Psychology’s Ethics Committee (EC.19.07.16.5653GR2A4). Participants consented at the beginning of the study by ticking the “I consent to taking part in this study” box before they were able to participate.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Supplementary Information: Methods.

Additional file 2.

Supplementary Information: Analysis.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Challenger, A., Sumner, P. & Bott, L. COVID-19 myth-busting: an experimental study. BMC Public Health 22, 131 (2022). https://doi.org/10.1186/s12889-021-12464-3
