Table 5 Summary of the three sources of validity evidence for the eHealth Literacy Questionnaire

From: Translation, cultural adaptation and validity assessment of the Dutch version of the eHealth Literacy Questionnaire: a mixed-method approach

Validity argument^a

Evidence collected

1. Test content

1.1 The items are clear and understandable to everyone without any technical jargon

No evidence of major misunderstanding was observed during cognitive interviewing. The wording or tone of four items was amended based on cognitive interviews and CFA results

1.2 The number of items is appropriate and will not impose an unnecessary burden on respondents

No missing values were reported in study 2, indicating that the questionnaire was not overly burdensome for respondents. The number of items was deemed appropriate

1.3 The eHLQ can be administered in various formats to ensure respondents with varied skills can participate

A paper-based format (study 1), face-to-face interviews (study 1) and a web-based format were used. No problems were encountered with any of the formats

1.4 The paper-based and web-based formats allow respondents to answer the items easily

Some issues in responding to the items were identified during cognitive interviews. These issues related to a lack of resonance with respondents' worldviews

2. Response process

2.1 The four-point ordinal response scale is appropriate

Four participants desired an additional response option for the items identified as having limited applicability

2.2 Formats of administration do not affect the cognitive process of responding to the items

Not evaluated. Prior studies show no differences between administration formats [41]

2.3 The items are understood by respondents as intended by the test developers

Twelve items showed problems with limited applicability, unclear references, or wording and tone. Comparison with CFA results and discussion with the consensus team led to the revision of eight items

2.4 The items are understood in the same way across subgroups, as intended

Differences in interpretation were observed during cognitive interviews based on (digital) healthcare use. Eight items were identified as having problems with ‘limited applicability’ or ‘resonance with worldview’. This observation was confirmed by invariance tests demonstrating partial invariance between groups based on current diagnosis

Differences in interpretation can result from prior experience with digital health technology. We recommend that administrators of the eHLQ collect contextual information on prior and current eHealth use and diagnosis. We also recommend collecting eHLQ validity evidence in different settings and populations and performing invariance testing based on eHealth experience

3. Internal structure

3.1 The items of each construct reflect a spectrum of the relevant construct such that the resulting score is a good indicator of the construct

Only item 26 showed strong residual correlations with seven other items, indicating that the item shares variance with those items beyond what the underlying latent factor explains

3.2 The eHLQ is a multidimensional tool consisting of seven independent constructs, each with 4 to 6 relevant items that relate only to the designated construct

CFA confirmed adequate model fit for the seven-factor model, with acceptable fit indices. Standardized factor loadings ranged from 0.51 to 0.84. No significant cross-loadings were identified (a CFA specification sketch follows the table)

3.3 The eHLQ demonstrates measurement equivalence across subgroups and settings

The eHLQ is partially invariant across subgroups based on age, gender, educational level and current diagnosis. Items displaying potential non-invariance were reviewed and triangulated with cognitive interview data, resulting in the amendment of four items

3.4 The eHLQ produces stable and consistent results

Cronbach’s alpha values were acceptable (a computation sketch follows the table). Retesting was not performed

^a Validity argument adopted from Cheng et al. [40]
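For orientation only: rows 3.1 to 3.3 rest on a confirmatory factor analysis. The sketch below shows how such a seven-factor CFA could be set up in Python, assuming the semopy package; the data file, item names and the two spelled-out factors are hypothetical placeholders, the table does not state which estimator the study used, and the four-point ordinal responses are treated as continuous for simplicity.

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical seven-factor specification in lavaan-style syntax; each eHLQ
# scale has 4 to 6 items (row 3.2). Only two factors are spelled out here.
DESC = """
F1_using_technology =~ item_1 + item_2 + item_3 + item_4
F2_understanding =~ item_5 + item_6 + item_7 + item_8 + item_9
"""
# ...the remaining five factors would follow the same pattern.

df = pd.read_csv("ehlq_responses.csv")  # hypothetical response data, one item per column

model = Model(DESC)
model.fit(df)                # default ML estimation; a simplification for ordinal items
print(calc_stats(model).T)   # fit indices such as CFI, TLI and RMSEA
print(model.inspect())       # parameter estimates, including factor loadings
```

Measurement invariance (rows 2.4 and 3.3) would extend this by fitting the model in multiple groups with progressively constrained parameters; that step is omitted here because it depends on study-specific grouping variables.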
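Row 3.4 refers to Cronbach’s alpha, computed per scale as alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). As a worked illustration, here is a minimal, self-contained Python sketch; the column names and file are again hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale: rows are respondents, columns are items."""
    items = items.dropna()                      # listwise deletion of incomplete rows
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical usage for one 4-item eHLQ scale:
# df = pd.read_csv("ehlq_responses.csv")
# print(cronbach_alpha(df[["item_1", "item_2", "item_3", "item_4"]]))
```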