Quality of web-based information about the coronavirus disease 2019: a rapid systematic review of infodemiology studies published during the first year of the pandemic

Abstract

Background

Following the outbreak of the coronavirus disease 2019, adequate public information was of utmost importance. The public used the Web extensively to read about the pandemic, which placed significant responsibility on individuals in what was, for many, an unfamiliar situation as the disease spread across the globe. The aim of this review was to synthesize the quality of web-based information concerning the coronavirus disease 2019 published during the first year of the pandemic.

Materials and methods

A rapid systematic review was undertaken by searching five electronic databases (CINAHL, Communication & Mass Media Complete, PsycINFO, PubMed, Scopus). Empirical infodemiology reports assessing quality of information were included (n = 22). Methodological quality and risk of bias was appraised with tools modified from previous research, while quality assessment scores were synthesized with descriptive statistics. Topics illustrating comprehensiveness were categorized with content analysis.

Results

The included reports assessed text-based content (n = 13) and videos (n = 9). Most were rated as having good overall methodological quality (n = 17). In total, the reports evaluated 2,654 websites or videos and utilized 46 assessors. The majority of the reports concluded that websites and videos had poor quality (n = 19). Collectively, readability levels exceeded the recommended sixth-grade level. There were large variations in the ranges of the reported mean or median quality scores, with 13 of 15 total sample scores classified as poor or moderate quality. Four studies reported that ≥ 28% of websites contained inaccurate statements. There were large variations in prevalence across the six categories illustrating comprehensiveness.

Conclusion

The results highlight quality deficits of web-based information about COVID-19 published during the first year of the pandemic, suggesting a high probability that this hindered the general population from being adequately informed when faced with the new and unfamiliar situation. Future research should address the highlighted quality deficits, identify methods that aid citizens in their information retrieval, and identify interventions that aim to improve the quality of information in the online landscape.

Background

The rates of infection in the coronavirus disease 2019 (COVID-19), caused by SARS-CoV-2, escalated rapidly following the outbreak in 2019 [1]. The disease caused considerable morbidity and mortality during its first year, particularly among older individuals and those with predisposing conditions [2, 3]. Consequently, this significant threat to public health required rapid implementation of a wide range of preventive measures within the first year of the pandemic, with the purpose of mitigating infectious spread and impact mainly through behavioral changes in the general population [4, 5]. To achieve high public adherence to the preventive measures rapidly implemented during the first year, successful dissemination of high-quality, accurate information was necessary [6]. There was a high public demand for information about COVID-19 in the initial period of the pandemic, and many members of the general population used the Web to search for information about this topic [7, 8]. Indeed, the Web has the potential to disseminate accessible and tailored information [9], possibly acting as a large and useful source of information during an epidemic or pandemic.

The Internet is an immense platform of information, encompassing vast and constantly growing volumes of health-related content, most of which is freely accessible to the public. With no standardized system in place to vet what is published online, the literature acknowledges a substantial risk of encountering information of substandard quality when browsing the Web [9,10,11,12]. To enhance knowledge of what information the public encounters when accessing the Web, an increasing number of researchers utilize methods to assess its quality [11,12,13]. One aspect of the field of supply-based infodemiology concerns systematic methods of evaluating the information that is published on the Web [14]. Studies in various medical fields have consistently indicated that a large majority of websites have substandard quality [10,11,12], illustrating a problematic situation since the Internet is heavily used by the general population as a source of health-related information [15]. Epidemics and pandemics involve a particular circumstance, since a large proportion of the general population is tasked with sorting through a considerable flow of online information on their own. This process is challenging and involves a high risk of widespread promulgation of misinformation and conspiracy theories, often referred to as an infodemic [16]. Recently, the importance of measuring the impact of infodemics during health emergencies and understanding the spread of low-quality information in public health research has been further emphasized [17].

Since the first year of the pandemic, the public health scenario has involved new challenges as well as opportunities, in particular through the emergence of variants of the virus and the widespread introduction of vaccines. Nevertheless, the first year of the pandemic will undoubtedly be remembered by many citizens as a frightening and unfamiliar situation in which they were required to apply health-related information in new and challenging ways. To learn how information dissemination can be improved in future health emergencies involving communicable diseases, researchers, health professionals and other stakeholders need to consider the potential issues that emerged during the first phase of the COVID-19 pandemic. While the importance of disseminating high-quality information to the public during an epidemic or pandemic is unquestionable, little is yet known about the quality of web-based information about COVID-19. Thus, the aim of this rapid review was to synthesize the evidence on the quality of web-based information concerning the coronavirus disease 2019, intended for the general population and published during the first year of the pandemic.

Methods

Design

To meet the need for evidence synthesis due to the pandemic outbreak of COVID-19, a rapid systematic review was undertaken. Rapid reviews are more structured than literature or narrative reviews and involve components of a systematic review, but with some steps simplified or omitted in order to produce evidence in a timely manner [18].

Search methods

Five electronic databases were used to search for published original articles: (i) CINAHL, (ii) Communication & Mass Media Complete, (iii) PsycINFO, (iv) PubMed and (v) Scopus. The searches were performed on 11 December 2020, one year after the first confirmed outbreak. Relevant search terms were identified via the following database vocabularies for indexation: CINAHL Subject Headings, Medical Subject Headings, and PsycINFO subjects. Additional search terms were inspired by a review investigating criteria for quality evaluation of online health information [19]. When applicable, truncation and Boolean operators were used. Details concerning the searches are presented in Additional File 1.
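As an illustration only (the terms below are hypothetical; the actual search strings are given in Additional File 1), a query of this general form shows how truncation (*) and Boolean operators combine:

    (covid* OR coronavirus OR "SARS-CoV-2")
    AND (web* OR online OR internet)
    AND (quality OR readability OR accuracy OR misinformation)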

Reports were included based on the following criteria: (i) observational empirical infodemiology study investigating the quality of web-based information about COVID-19 intended for public audiences, (ii) published in 2019 or 2020, (iii) systematic quality assessments of web-based information, and (iv) published in the English language. Reports were excluded if they investigated: (i) social media, (ii) peer support communication, (iii) information intended for non-public audiences (e.g., health professionals or stakeholders), or (iv) exclusively news articles, since the aim was not to evaluate these sources as single items containing information about COVID-19. Abstracts, letters, editorials, comments and single case studies were excluded.

In total, 4,803 hits were returned from the searches in the databases, of which 2,044 were duplicates. After screening the remaining titles and abstracts, 2,714 hits were excluded, and thus 45 reports were read in full to assess eligibility. Among these, 23 were excluded after reading the full-text document, resulting in 22 reports included in the review. The last author performed the screening and eligibility assessment of titles, abstracts and full-text documents. Figure 1 presents the searches, screening and eligibility assessment. The identified reports were imported into the citation management software Zotero (version 5.0.96) [20] and the process was managed with the aid of the web application Rayyan QCRI [21]. No automation tools were utilized during the searches, screening or eligibility assessment of reports.

Fig. 1 Flowchart of the searches performed in the electronic databases
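As a minimal sketch in R (the language also used for the descriptive statistics in this review), the flow arithmetic reported above can be checked directly:

    # Screening flow reported above (counts from the text and Fig. 1)
    hits       <- 4803                 # records retrieved from the five databases
    duplicates <- 2044                 # duplicate records removed
    screened   <- hits - duplicates    # 2,759 titles/abstracts screened
    fulltext   <- screened - 2714      # 45 reports read in full
    included   <- fulltext - 23        # 22 reports included in the review
    stopifnot(fulltext == 45, included == 22)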

Methodological appraisal and risk of bias assessment

The appraisal of the methodological reporting and risk of bias in the included reports was conducted with a pre-specified tool inspired by a previous review of consumer-oriented health information on the Web [10] and the NIH Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies [22], which has been used in a previous review investigating the quality of online information [11]. As yet, no widely established instrument exists for the systematic assessment of bias in empirical studies investigating the quality of websites. Therefore, the authors modified the aforementioned instruments to fit the context and inquiry of supply-based infodemiology; the full appraisal instrument is presented in Additional File 2. The last author performed the quality appraisal and the first author scrutinized the initial appraisal to check for reviewer consistency. Any disagreements were settled through discussion between the two authors until consensus was reached.

Data extraction and synthesis

The extraction and synthesis were based on the following structure, according to definitions presented in a previous review on quality criteria [19]: (i) readability ('whether information is presented in a form that is easy to read'), (ii) quality assessments with systematic instruments, (iii) accuracy ('whether a source or information is consistent with agreed-upon scientific findings'), and (iv) comprehensiveness ('whether a source or information covers a wide range of topics') or completeness ('whether necessary or expected aspects of a subject/topic are provided'). The last author developed a data extraction form, inspired by a previous review investigating the quality of online health information for patients and the general population [11]. In regard to quality assessment instruments, total sample and subsample scores were extracted and analyzed with descriptive statistics. Quality assessment scores were determined by calculating the percentage of the total score of the quality assessment instruments (i.e., mean or median score divided by the maximum achievable score of the scale/instrument and multiplied by 100). In accordance with previous work [11], the quality assessments were classified as poor (< 44%), moderate (44–80%) or excellent (> 80%). In regard to readability, classification of grade-level scores was based on recommendations from the Joint Commission, stating that patient education materials should be written below a sixth-grade level [23], corresponding to a Flesch Reading Ease (FRE) score > 80 [24]. In regard to comprehensiveness or completeness, the reported topics covered in the websites were categorized with inductive content analysis by collating the reported topics into categories and subcategories, defined as collections of topics sharing an internally homogeneous and externally heterogeneous content [25]. The reported prevalence of the categorized topics was then extracted and analyzed with descriptive statistics. Lastly, the overall conclusion of each included publication was judged as either (i) poor quality with quality improvements needed, (ii) moderate or varied quality, or (iii) good or excellent quality with no quality improvements needed. RStudio was used to calculate descriptive statistics. The last author performed the data extraction, synthesis and analysis. No assumptions were made in regard to missing or unclear information. When a report only presented quality scores for a certain subset of included websites or videos, this score was considered a total sample score. Scores in studies reporting results from the same samples were omitted in the analysis. The data extracted from the included studies are attached as an additional file.
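The scoring rule above can be expressed compactly. The following R sketch implements the percentage conversion and the classification cut-offs; the example score and scale maximum are hypothetical:

    # Convert a raw mean/median score to a percentage of the maximum
    score_pct <- function(score, max_score) 100 * score / max_score

    # Cut-offs from previous work [11]: poor < 44%, moderate 44-80%, excellent > 80%
    classify_quality <- function(pct) {
      ifelse(pct < 44, "poor",
             ifelse(pct <= 80, "moderate", "excellent"))
    }

    pct <- score_pct(score = 2.2, max_score = 5)  # hypothetical rating on a 0-5 scale
    classify_quality(pct)                         # 44% -> "moderate"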

Results

Methodological appraisal and risk of bias

The median number of adhered methodological quality benchmarks was 5/9 for the search process, 2.75/6 for the assessment process and 5/7 for the modified NIH assessment tool (Table 1). The overall methodological quality of the studies was judged as good (n = 17 studies) or fair (n = 5 studies). The included studies investigated text-based content (n = 13) and videos (n = 9), evaluating a total of 2,654 websites and videos (Median = 107.5, Range = 18–321) identified via Google (n = 12 studies), YouTube (n = 9 studies), Bing (n = 1 study) and Yahoo (n = 1 study). Languages assessed in the studies were English (n = 17 studies), Spanish (n = 6 studies), Mandarin (n = 1 study), Korean (n = 1 study) and Turkish (n = 1 study). Combined, the studies utilized a total of 46 reported assessors (Median = 2, Range = 2–7). In total, assessor qualification was not reported for 33 of the assessors who evaluated the websites or videos, while the studies reporting assessor qualification utilized physicians (MD) with an MSc or PhD (n = 7 assessors), medical students/trainees (n = 3 assessors), assessors with a Master of Science (n = 2 assessors), and assessors with an EdD and MPH (n = 2 assessors). Two studies did not report the number of assessors and four exclusively investigated automated readability calculations, with no need for quality assessors. Additional File 3 presents the methodological details of the included studies.

Table 1 Methodological characteristics of the included reports (n = 22)

Results of data extraction and synthesis

Conclusions about website quality

The majority of the studies concluded that websites or videos about COVID-19 had poor quality with quality improvements needed (n = 19) [26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44]. Three studies concluded that websites or videos had moderate or varied quality [45,46,47] and none of the included studies concluded that websites or videos had good or excellent quality with no quality improvements needed.

Readability

Readability of written information was evaluated with seven grade-level readability formulas (n = 5 reports evaluating n = 694 websites) [26,27,28,29,30] and the FRE (n = 4 reports evaluating n = 485 websites) [26,27,28, 31]. The reported mean or median total sample and subsample scores illustrated a readability exceeding the recommended grade-level (range = 8.7–14.3) [26,27,28,29,30] and FRE (range = 44.1–54.1) [26,27,28, 31] thresholds (Table 2 and Fig. 2). Two reports found varied prevalence of infographics (7% in one study [30] and 75% in another [31]). According to one report, the option of viewing similar information in alternative languages was noted in 3% of websites [30].
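For reference, two of the most widely used measures, the FRE and the Flesch Kincaid Grade Level (FKGL), are computed from word, sentence and syllable counts alone. A minimal R sketch of the standard published formulas (the example counts are hypothetical; this is not code from the included reports):

    # Standard published readability formulas
    flesch_reading_ease <- function(words, sentences, syllables) {
      206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
    }
    flesch_kincaid_grade <- function(words, sentences, syllables) {
      0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    }

    # Hypothetical text sample: 250 words, 12 sentences, 400 syllables
    flesch_reading_ease(250, 12, 400)   # ~50.3, below the recommended FRE > 80
    flesch_kincaid_grade(250, 12, 400)  # ~11.4, above the recommended 6th grade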

Table 2 The total sample and subsample mean or median quality scores for the included reports (some only reported total sample or subsample score)
Fig. 2 Mean and median readability total sample and subsample scores extracted from the included reports

Quality assessments with systematic instruments

Ten reports assessed quality using a total of nine instruments (Table 2) [29, 31,32,33,34,35,36,37, 45, 46]. The mean or median total sample scores ranged from 20% to 100% (Table 2), with 13 out of 15 reported scores classified as poor or moderate quality (poor quality: n = 4 reports evaluating n = 458 websites with n = 4 instruments [29, 32, 35, 45]; moderate quality: n = 3 reports evaluating n = 471 websites with n = 5 instruments [31, 34, 45]; excellent quality: n = 2 reports evaluating n = 211 websites with n = 2 instruments [29, 45]) (Fig. 3). Mean or median subsample scores ranged from 0% to 100% (Table 2), with most scores classified as poor or moderate quality (Fig. 3). Two reports investigated the Health on the Net Foundation Code of Conduct certification and found that ≤ 18% of websites were accredited [31, 32].

Fig. 3 Mean and median quality total sample and subsample scores extracted from the included reports

The utilized instruments in the reports evaluated the following quality criteria: actionability, authorship, attribution, content (including the COVID-19 specific content prevalence, transmission, signs/symptoms, screening/testing, and treatment/outcome), currency, disclosure, ease of use, flow of information, identification, reliability, sensationalist style, structure, understandability, usability, usefulness, and quality of information about treatment options (Additional File 4).

Accuracy assessments

Six reports assessed accuracy by comparing information against current scientific literature or guidelines from health agencies [32, 37, 43, 45,46,47]. Four reports found that ≥ 28% of websites contained inaccurate statements (range 28–58%) [32, 37, 43, 45], while two found that < 10% of websites contained inaccuracies [46, 47]. One report found that 11% of the investigated videos included information categorized as hoaxes [40]. Sources with a high prevalence or likelihood of inaccurate statements were published by independent users or consumers [45,46,47] and news outlets [37], while sources with a low prevalence or likelihood of inaccurate statements were published by government [37, 45], health care [37] and news outlets [47].

Comprehensiveness and completeness assessments

There were large variations in the reported prevalence across all six identified categories: general information (range = 12–86%) [29, 31, 35, 36, 38, 43, 45,46,47], prevention (range = 2–95%) [29, 31, 34,35,36, 38,39,40,41,42,43,44,45,46,47], risk groups (range = 8–77%) [29, 31, 35], symptoms (range = 25–98%) [29, 31, 35, 36, 38, 45,46,47], testing (range = 5–98%) [29, 31, 35, 36, 43, 45,46,47] and treatment (range = 8–97%) [29, 31, 34,35,36, 38, 41, 42, 45,46,47] (Fig. 4). Additional File 5 presents the content and prevalence of each identified category and subcategory in detail.

Fig. 4 Categories and subcategories extracted from the included reports investigating comprehensiveness
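The prevalence ranges reported above can be collected in one small structure for inspection or plotting; a minimal R sketch using the values from the text:

    # Reported prevalence ranges per category (values from the text above)
    comprehensiveness <- data.frame(
      category = c("general information", "prevention", "risk groups",
                   "symptoms", "testing", "treatment"),
      min_pct  = c(12,  2,  8, 25,  5,  8),
      max_pct  = c(86, 95, 77, 98, 98, 97)
    )
    # The width of each range shows the large between-study variation
    comprehensiveness$spread <- comprehensiveness$max_pct - comprehensiveness$min_pct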

Discussion

In this rapid systematic review, 22 reports investigating web-based information during the first year of the COVID-19 pandemic were summarized and synthesized. The methodological appraisal and risk of bias assessment revealed fair to good standards of reporting. The majority of the included reports concluded that the information had poor quality and that quality improvements were needed.

In line with previous reviews investigating the readability of online health information [12, 48], readability was uniformly determined to exceed the recommended levels. Health literacy, defined as the ability to access, process and interpret information needed to reach an informed health-related decision [49], is an essential concept when discussing the dissemination of readable information. A significant proportion of the global population shows only the lowest or basic levels of literacy [50]. Low health literacy is strongly associated with increased hospitalization and mortality [51], and low coronavirus-related eHealth literacy is associated with less adherence to preventive measures that may mitigate infectious spread [52]. For web-based information to be understood and applied by the general population, it needs to be readable by a large proportion of the population. To a significant extent, the included studies relied on automated readability formulas to draw conclusions. Automated readability formulas have been criticized for being heavily reliant on word and sentence factors, while ignoring other readability-related aspects such as the inclusion of graphics and comprehension [53]. While most studies used automated readability formulas, one study also suggested a low prevalence of infographics and few websites containing information in alternative languages. Taken together, our results suggest a problematic situation, with most websites exceeding the recommended readability levels and not matching the literacy levels found in the diverse population. A large majority of the included studies assessed websites in the English language. During a pandemic, high-quality information needs to reach diverse populations. There is a need to evaluate readability in more languages and with assessment methods beyond automated readability formulas, including empirical studies in which the intended end-users rate the readability of sources.

A range of instruments for quality assessment was utilized in the included reports, which combined illustrate varied quality from the perspectives of several quality criteria, with a tendency towards poor or moderate quality. Our results are highly similar to the findings reported in a previous review investigating the quality of online health information in general [11], indicating that the problematic situation of low quality also applies in the context of COVID-19. Website quality is a multidimensional concept involving several different quality criteria [19], many of which were represented in the included studies. In addition to investigating readability, accuracy, and comprehensiveness/completeness, nine quality assessment tools were utilized in the included reports. The assessment tools focus on various aspects of quality (Additional File 4), involving aspects such as usefulness, reliability, content, identification, structure, usability, understandability, and specific information related to COVID-19. There are a number of diverse quality criteria and standards not addressed in the included studies, and thus, more studies are probably needed to fully cover the multidimensional nature of the concept. Nevertheless, the results illustrate that the quality of web-based sources about COVID-19 was substandard, based on several criteria and from the perspectives of multiple assessors. Inspecting the reported subsample scores, we did not clearly identify any specific sources that stood out as having particularly low quality, with high variability regardless of source. This calls attention to the likely situation that substandard quality is widely represented within the online landscape, regardless of the type of source behind web-based information. Laypersons who search for health-related information on the Web report low self-efficacy in their ability to successfully identify high-quality online information [48], calling attention to the need for support that encourages the use of high-quality information sources. We urge developers, decision-makers and stakeholders to take actions that increase the probability that the general population encounters high-quality information when accessing the Web to read about COVID-19 or other communicable diseases causing epidemics and pandemics.

There are methodological limitations of this rapid review that need to be considered when interpreting the results. First, the last author was responsible for screening the hits retrieved from the five electronic databases. It is possible that some studies relevant to this review were unintentionally excluded, since only one researcher performed the screening and eligibility assessment. On the other hand, the use of several databases, a citation manager, and screening software reduces the risk of this potential error. Further, we did not screen any grey literature, which involves a risk of overlooking relevant research not published in the scholarly journals covered by the utilized electronic databases. Second, the last author performed the methodological appraisal and risk of bias assessment, which was scrutinized by the first author until consensus was reached. Due to the lack of widely established instruments for systematic appraisal, the process was conducted with modified versions of instruments used in previous research [10, 22]. We acknowledge this limitation and encourage the development of valid instruments for the systematic appraisal of methodology and risk of bias in empirical studies investigating the quality of web-based information. Third, the quality assessment instruments used in the included studies may involve methodological concerns, such as assessor bias and the limitations of automated readability formulas. The range of methodologies used in the included studies illustrates the multidimensional aspects of the concept of quality criteria, but also calls attention to the need to establish standards for researchers conducting empirical systematic quality assessments. We appraised the studies as having overall good methodological quality. However, a number of the investigated quality benchmarks were not reported in the studies, particularly consumer involvement in the search or assessment process and interrater reliability in website selection. Finally, this review highlights reports detailing the quality of web-based information during the first year of the pandemic. The global scenario surrounding COVID-19 is constantly changing, and new public health interventions, including vaccines, have been implemented since the time of our searches. Readers must take time-sensitive aspects into consideration when interpreting our findings. We acknowledge a need for future updated reviews that summarize and synthesize the evidence on public information in later stages of the pandemic. These methodological aspects should be considered and reported when planning future infodemiology studies assessing the quality of web-based information.

Conclusion

This rapid review highlights quality deficits of web-based information about COVID-19 published during the first year of the pandemic, suggesting a high probability that this hindered the general population from being adequately informed. The results call attention to the need to ensure the dissemination of high-quality information when communicable diseases cause epidemics or pandemics. Considering the high risk of encountering substandard quality when searching for information, developers, decision-makers and stakeholders need to take actions aimed at increasing the likelihood of successful dissemination of trustworthy and accurate information that promotes the behavioral changes needed to mitigate the spread and impact of epidemics and pandemics. Future research should address the highlighted quality deficits, identify methods that aid citizens in their retrieval of high-quality information during pandemics, and identify interventions that can help change and improve the online landscape.

Availability of data and materials

All data generated or analysed during this study are included in this published article and its supplementary information files.

Abbreviations

ARI: Automated Readability Index
CLI: Coleman-Liau Index
COVID-19: Coronavirus Disease 2019
CSS: Novel COVID-19 Specific Score
EQIP: Ensuring Quality Information for Patients
FKGL: Flesch Kincaid Grade Level
FORCAST: Ford, Caylor, Sticht Formula
FRE: Flesch Reading Ease
GFI: Gunning Fog Index
GQS: Global Quality Score
JAMA: Journal of the American Medical Association benchmarks
IRR: Interrater Reliability
MICI: Medical Information and Content Index
PEMAT-A: Patient Education Materials Assessment Tool – Actionability subscale
PEMAT-U: Patient Education Materials Assessment Tool – Understandability subscale
SMOG: Simple Measure of Gobbledygook
TCCI: Title–Content Consistency Index

References

  1. Xie Y, Wang Z, Liao H, Marley G, Wu D, Tang W. Epidemiologic, clinical, and laboratory findings of the COVID-19 in the current pandemic: systematic review and meta-analysis. BMC Infect Dis. 2020;20:640.

  2. Tian W, Jiang W, Yao J, Nicholson CJ, Li RH, Sigurslid HH, et al. Predictors of mortality in hospitalized COVID-19 patients: a systematic review and meta-analysis. J Med Virol. 2020;92:1875–83.

  3. Izcovich A, Ragusa MA, Tortosa F, Lavena Marzio MA, Agnoletti C, Bengolea A, et al. Prognostic factors for severity and mortality in patients infected with COVID-19: a systematic review. PLoS One. 2020;15:e0241955.

  4. Arefi MF, Poursadeqiyan M. A review of studies on the COVID-19 epidemic crisis disease with a preventive approach. Work. 2020;66:717–29.

  5. Rios P, Radhakrishnan A, Williams C, Ramkissoon N, Pham B, Cormack GV, et al. Preventing the transmission of COVID-19 and other coronaviruses in older adults aged 60 years and above living in long-term care: a rapid review. Syst Rev. 2020;9:218.

  6. Anwar A, Malik M, Raees V, Anwar A. Role of Mass media and public health communications in the COVID-19 pandemic. Cureus. 2020;12:e10453.

  7. Le HT, Nguyen DN, Beydoun AS, Le XTT, Nguyen TT, Pham QT, et al. Demand for health information on COVID-19 among Vietnamese. Int J Environ Res Public Health. 2020;17:4377.

  8. Jo W, Lee J, Park J, Kim Y. Online information exchange and anxiety spread in the early stage of the Novel Coronavirus (COVID-19) outbreak in South Korea: structural topic model and network analysis. J Med Internet Res. 2020;22:e19455.

  9. Cline RJ, Haynes KM. Consumer health information seeking on the internet: the state of the art. Health Educ Res. 2001;16:671–92.

  10. Eysenbach G, Powell J, Kuss O, Sa E-R. Empirical studies assessing the quality of health information for consumers on the world wide web: a systematic review. JAMA. 2002;287:2691–700.

  11. Daraz L, Morrow AS, Ponce OJ, Beuschel B, Farah MH, Katabi A, et al. Can patients trust online health information? A meta-narrative systematic review addressing the quality of health information on the internet. J Gen Intern Med. 2019;34:1884–91.

  12. Daraz L, Morrow AS, Ponce OJ, Farah W, Katabi A, Majzoub A, et al. Readability of online health information: a meta-narrative systematic review. Am J Med Qual. 2018;33:487–92.

  13. Abdel-Wahab N, Rai D, Siddhanamatha H, Dodeja A, Suarez-Almazor ME, Lopez-Olivo MA. A comprehensive scoping review to identify standards for the development of health information resources on the internet. PLoS One. 2019;14:e0218342.

  14. Eysenbach G. Infodemiology and infoveillance: framework for an emerging set of public health informatics methods to analyze search, communication and publication behavior on the Internet. J Med Internet Res. 2009;11:e11.

  15. Clarke MA, Moore JL, Steege LM, Koopman RJ, Belden JL, Canfield SM, et al. Health information needs, sources, and barriers of primary care patients to achieve patient-centered care: a literature review. Health Informatics J. 2016;22:992–1016.

  16. The Lancet Infectious Diseases. The COVID-19 infodemic. Lancet Infect Dis. 2020;20:875.

  17. Calleja N, AbdAllah A, Abad N, Ahmed N, Albarracin D, Altieri E, et al. A public health research agenda for managing infodemics: methods and results of the first WHO infodemiology conference. JMIR Infodemiology. 2021;1:e30979.

  18. Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, et al. A scoping review of rapid review methods. BMC Med. 2015;13:224.

  19. Sun Y, Zhang Y, Gwizdka J, Trace CB. Consumer evaluation of the quality of online health information: systematic literature review of relevant criteria and indicators. J Med Internet Res. 2019;21:e12522.

  20. Roy Rosenzweig Center for History and New Media. Zotero. 2021. https://www.zotero.org/. Accessed 2 Mar 2021.

  21. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan—a web and mobile app for systematic reviews. Syst Rev. 2016;5:210.

  22. National Heart, Lung and Blood Institute. Study Quality Assessment Tools. https://www.nhlbi.nih.gov/health-topics/study-quality-assessment-tools. Accessed 25 Feb 2021.

  23. The Joint Commission. Advancing effective communication, cultural competence, and patient- and family-centered care: a roadmap for Hospitals. Oakbrook Terrace, IL: The Joint Commission; 2010.

  24. Ma Y, Yang AC, Duan Y, Dong M, Yeung AS. Quality and readability of online information resources on insomnia. Front Med. 2017;11:423–31.

  25. Graneheim UH, Lundman B. Qualitative content analysis in nursing research: concepts, procedures and measures to achieve trustworthiness. Nurse Educ Today. 2004;24:105–12.

  26. Basch CH, Mohlman J, Hillyer GC, Garcia P. Public health communication in time of crisis: readability of on-line COVID-19 information. Disaster Med Public Health Prep. 2020;14:635–7.

  27. Worrall AP, Connolly MJ, O’Neill A, O’Doherty M, Thornton KP, McNally C, et al. Readability of online COVID-19 health information: a comparison between four English speaking countries. BMC Public Health. 2020;20:1635.

  28. Szmuda T, Özdemir C, Ali S, Singh A, Syed MT, Słoniewski P. Readability of online patient education material for the novel coronavirus disease (COVID-19): a cross-sectional health literacy study. Public Health. 2020;185:21–5.

  29. Kruse J, Toledo P, Belton TB, Testani EJ, Evans CT, Grobman WA, et al. Readability, content, and quality of COVID-19 patient education materials from academic medical centers in the United States. Am J Infect Control. 2020. https://doi.org/10.1016/j.ajic.2020.11.023.

  30. Khan S, Asif A, Jaffery AE. Language in a time of COVID-19: literacy bias ethnic minorities face during COVID-19 from online information in the UK. J Racial Ethn Health Disparities. 2020. https://doi.org/10.1007/s40615-020-00883-8.

  31. Jayasinghe R, Ranasinghe S, Jayarajah U, Seneviratne S. Quality of online information for the general public on COVID-19. Patient Educ Couns. 2020. https://doi.org/10.1016/j.pec.2020.08.001.

  32. Cuan-Baltazar JY, Muñoz-Perez MJ, Robledo-Vega C, Pérez-Zepeda MF, Soto-Vega E. Misinformation of COVID-19 on the internet: infodemiology study. JMIR Public Health Surveill. 2020;6:e18444.

  33. Joshi A, Kajal F, Bhuyan SS, Sharma P, Bhatt A, Kumar K, et al. Quality of Novel Coronavirus related health information over the internet: an evaluation study. ScientificWorldJournal. 2020;2020:1562028.

  34. Fan KS, Ghani SA, Machairas N, Lenti L, Fan KH, Richardson D, et al. COVID-19 prevention and treatment information on the internet: a systematic analysis and quality assessment. BMJ Open. 2020;10:e040487.

  35. Szmuda T, Syed MT, Singh A, Ali S, Özdemir C, Słoniewski P. YouTube as a source of patient information for Coronavirus Disease (COVID-19): a content-quality and audience engagement analysis. Rev Med Virol. 2020;30:e2132.

  36. Yuksel B, Cakmak K. Healthcare information on YouTube: pregnancy and COVID-19. Int J Gynaecol Obstet. 2020;150:189–93.

  37. Li HO-Y, Bailey A, Huynh D, Chan J. YouTube as a source of information on COVID-19: a pandemic of misinformation? BMJ Glob Health. 2020;5:e002604.

  38. Basch CH, Hillyer GC, Meleo-Erwin ZC, Jaime C, Mohlman J, Basch CE. Preventive behaviors conveyed on YouTube to mitigate transmission of COVID-19: cross-sectional study. JMIR Public Health Surveill. 2020;6:e18807.

  39. Basch CE, Basch CH, Hillyer GC, Jaime C. The role of YouTube and the entertainment industry in saving lives by educating and mobilizing the public to adopt behaviors for community mitigation of COVID-19: successive sampling design study. JMIR Public Health Surveill. 2020;6:e19145.

  40. Hernández-García I, Giménez-Júlvez T. Characteristics of YouTube videos in Spanish on how to prevent COVID-19. Int J Environ Res Public Health. 2020;17:1–10.

  41. Hernández-García I, Giménez-Júlvez T. Assessment of health information about COVID-19 prevention on the internet: Infodemiological Study. JMIR Public Health Surveill. 2020;6:e18717.

  42. Hernández-García I, Giménez-Júlvez T. Information in Spanish on the internet about the prevention of COVID-19. Int J Environ Res Public Health. 2020;17:1–11.

  43. Taylor-Phillips S, Berhane S, Sitch AJ, Freeman K, Price MJ, Davenport C, et al. Information given by websites selling home self-sampling COVID-19 tests: an analysis of accuracy and completeness. BMJ Open. 2020;10:e042453.

  44. Rachul C, Marcon AR, Collins B, Caulfield T. COVID-19 and 'immune boosting' on the internet: a content analysis of Google search results. BMJ Open. 2020;10:e040989.

  45. Moon H, Lee GH. Evaluation of Korean-Language COVID-19-related medical information on YouTube: cross-sectional infodemiology study. J Med Internet Res. 2020;22:e20775.

  46. Khatri P, Singh SR, Belani NK, Yeong YL, Lohan R, Lim YW, et al. YouTube as source of information on 2019 novel coronavirus outbreak: a cross sectional study of English and Mandarin content. Travel Med Infect Dis. 2020;35:101636.

  47. D’Souza RS, D’Souza S, Strand N, Anderson A, Vogt MNP, Olatoye O. YouTube as a source of medical information on the novel coronavirus 2019 disease (COVID-19) pandemic. Glob Public Health. 2020;15:935–42.

  48. Kim H, Xie B. Health literacy in the eHealth era: a systematic review of the literature. Patient Educ Couns. 2017;100:1073–82.

  49. Sørensen K, Van den Broucke S, Fullam J, Doyle G, Pelikan J, Slonska Z, et al. Health literacy and public health: a systematic review and integration of definitions and models. BMC Public Health. 2012;12:80.

  50. OECD. OECD Skills Outlook 2013: First Results from the Survey of Adult Skills. OECD Publishing; 2013. https://ccrscenter.org/products-resources/resource-database/oecd-skills-outlook-2013-first-results-survey-adult-skills.

  51. Parker RM, Wolf MS, Kirsch I. Preparing for an epidemic of limited health literacy: weathering the perfect storm. J Gen Intern Med. 2008;23:1273–6.

  52. An L, Bacon E, Hawley S, Yang P, Russell D, Huffman S, et al. Relationship between Coronavirus-related eHealth literacy and COVID-19 knowledge, attitudes, and practices among US adults. J Med Internet Res. 2021. https://doi.org/10.2196/25042.

  53. Friedman DB, Hoffman-Goetz L. A systematic review of readability and comprehension instruments used for print and web-based cancer information. Health Educ Behav. 2006;33:352–73.

Acknowledgements

Not applicable.

Funding

Open access funding provided by Uppsala University. No funding supported this study.

Author information

Authors and Affiliations

Authors

Contributions

JS scrutinized and validated the methodological appraisal of the included publications and critically reviewed the manuscript; SG conceived and designed the study and critically reviewed the manuscript; TC was the principal investigator, conceived and designed the study, conducted the searches and screening procedure, performed the methodological appraisal, extracted data, analyzed the data, visualized the data, wrote the manuscript and was in charge of overall project administration. All authors have approved the final version of the manuscript.

Corresponding author

Correspondence to Tommy Carlsson.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.


Supplementary Information

Additional file 1.

Search details.

Additional file 2.

Instruments for methodological assessment.

Additional file 3.

Methodological presentation and conclusions of the included studies.

Additional file 4.

Content in the quality assessment instruments used in the included studies.

Additional file 5.

Topics in the identified subcategories representing completeness/comprehensiveness, with studies reporting each of the topics and ranges of prevalence reported in the publications.

Additional file 6.

Extracted mean/median scores and prevalence in the included studies.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Stern, J., Georgsson, S. & Carlsson, T. Quality of web-based information about the coronavirus disease 2019: a rapid systematic review of infodemiology studies published during the first year of the pandemic. BMC Public Health 22, 1734 (2022). https://doi.org/10.1186/s12889-022-14086-9
