
Revisiting the continuum of resistance model in the digital age: a comparison of early and delayed respondents to the Norwegian counties public health survey

Abstract

Background

The continuum of resistance model’s premise is that delayed respondents to a survey are more similar to non-respondents than early respondents are. For decades, survey researchers have applied this model in attempts to evaluate and adjust for non-response bias. Despite a recent resurgence in the model’s popularity, its value has only been assessed in one large online population health survey.

Methods

Respondents to the Norwegian Counties Public Health Survey in Hordaland, Norway, were divided into three groups: those who responded within 7 days of the initial email/SMS invitation (wave 1, n = 6950); those who responded after 8 to 14 days and 1 reminder (wave 2, n = 4950); and those who responded after 15 or more days and 2 reminders (wave 3, n = 4045). Logistic regression analyses were used to compare respondents’ age, sex and educational level between waves, as well as the prevalence of poor general health, life dissatisfaction, mental distress, chronic health problems, weekly alcohol consumption, monthly binge drinking, daily smoking, physical activity, low social support and receipt of a disability pension.

Results

The overall response to the survey was 41.5%. Respondents in wave 1 were more likely to be older, female and more highly educated than those in waves 2 and 3. However, there were no substantial differences between waves for any health outcomes, with a maximal prevalence difference of 2.6 percentage points for weekly alcohol consumption (wave 1: 21.3%, wave 3: 18.7%).

Conclusions

There appeared to be a mild continuum of resistance for demographic variables. However, this was not reflected in health and related outcomes, which were uniformly similar across waves. The continuum of resistance model is unlikely to be useful to adjust for nonresponse bias in large online surveys of population health.


Background

Differences are likely between people who respond to public health surveys and those who do not. Among non-respondents, there is commonly a disproportionate number of young [1,2,3,4,5,6,7,8], male [2,3,4,5,6,7,8,9,10], and unmarried people [1, 2, 5, 8, 11,12,13,14], as well as those with lower education [1, 2, 5,6,7,8, 10, 12,13,14,15], and lower socioeconomic status [5, 6, 8, 11, 13, 16]. Non-respondents are also more likely to be smokers [1, 4, 10, 14, 17,18,19], and to have different patterns of alcohol consumption [10, 16, 20,21,22], poorer physical and/or mental health [5, 7, 9, 10, 23], and higher rates of mortality and morbidity [20, 24,25,26,27,28,29]. If researchers fail to account for nonresponse bias, prevalence estimates (in particular) and analyses of associations between variables will likely be incorrect [9].

Many survey investigators hope to reduce the risk of nonresponse bias by achieving a high response rate, and often send multiple reminders to non-respondents, encouraging them to participate. Nevertheless, few large-scale public health surveys achieve a response rate adequate to reduce the likelihood of substantial nonresponse bias, which by some estimates is between 70 and 90% [30]. As participation rates in epidemiologic studies have been declining over time [2, 31], and because nonresponse bias can exist even when response rates are high [32], it is increasingly important for researchers to identify and account for nonresponse bias when summarising and analysing data. Obviously, this is a major challenge because information on non-respondents is often unavailable, particularly for the outcomes of interest.

Researchers have sought methods to identify nonresponse bias for decades. In 1939, Pace proposed that the existence and direction of nonresponse bias in a given survey could be detected by comparing the responses of people who respond quickly (early respondents) to those who only respond after repeated contact attempts (delayed respondents) [33]. This approach, often referred to as the “continuum of resistance” model [34], is based on the presumption that people who are slow or reluctant (i.e. resistant) to complete a questionnaire are more similar to non-respondents than early respondents are.

The continuum of resistance model has resurfaced periodically in the literature since its proposal, despite having performed inconsistently under empirical testing. Some early studies supported the existence of a continuum of resistance for outcomes of interest [35, 36]; however, others have found that early and delayed respondents do not differ at all [37, 38], that associations between delayed respondents and non-respondents are weak [34], or that a continuum of resistance exists for demographic variables but not for outcomes of interest [39]. Recently, the model has been applied in a number of surveys of health-related behaviours [3, 13, 40,41,42,43], and in one national public health survey [44]. In several studies on drug and alcohol use, researchers identified significant and consistent differences between early and late respondents for both demographic variables and outcomes of interest. Subsequently, delayed respondents’ data were used to adjust prevalence estimates to account for nonresponse bias [3, 13, 40, 42, 43].

Given the apparent value of the continuum of resistance model in these recent studies, and because the model has only been applied once in a large online survey of population health [44], we compared early and delayed respondents of the internet-based Norwegian Counties Public Health Survey. We hypothesized that, compared to early respondents, late respondents would be more likely to be male, young and less educated, and that they would have a higher prevalence of poor health outcomes and behaviours associated with poor health.

Methods

Study design and setting

The Norwegian Counties Public Health Survey is an online cross-sectional study of self-reported health, health-related behaviours, quality of life, and local health-related factors in the Norwegian general population. The survey was launched by the Norwegian Institute of Public Health in 2015 and is currently ongoing, covering each of Norway’s 11 counties every 4 years. We performed this investigation using data collected in the county of Hordaland between the 10th of April and the 13th of June 2018. The survey was approved by the Norwegian Data Protection Authority. This study is a secondary analysis of previously collected data, and according to the Norwegian Health Research Act, additional ethical approval was not required.

Participants

A random sample of 38,458 Hordaland County residents was selected from the National Population Register and invited to participate in this survey. To be eligible for the sampling frame, residents needed to be over 18 years of age and have their mobile telephone number and email address registered with the Norwegian Agency for Public Management and eGovernment. This contact register includes approximately 80% of Norwegian residents aged between 18 and 65 years and 50% of those aged over 75 years [45]. The sample size was determined to allow for subgroup analysis on a municipal level (minimum of 400 participants per subgroup), with oversampling of the smallest municipalities and an expected overall response of 30 to 40%.

The questionnaire, which could be completed using a PC, tablet or smartphone, was distributed to the sample by email and short message service (SMS) using a secure platform [46]. Non-respondents received email and SMS reminders with a link to the questionnaire on the 7th and 15th day after the initial distribution, and the survey remained open for 5 weeks and 3 days.

Variables

  • Response to questionnaire invitation: All members of the invited sample were categorised as survey respondents or non-respondents. Respondents were further categorised into one of three groups based on when they completed the questionnaire: (1) Wave 1 - completed the questionnaire prior to the first reminder; (2) Wave 2 - completed the questionnaire between the first and second reminders; (3) Wave 3 - completed the questionnaire after the second reminder.

  • Gender: Male or female, as recorded in the national population register.

  • Age: Categorised as 18–29, 30–39, 40–49, 50–59, 60–69 and 70 or older.

  • Education: Respondents were categorised according to their highest attained level of education: junior high school (up to and including 10th grade), senior high school (up to and including 13th grade), university or university college for less than 4 years, and university or university college for 4 or more years.

  • Poor general health: Respondents were categorised as having poor general health if they reported having bad or very bad general health on a 5-point scale including very good, good, neither good nor bad, bad and very bad.

  • Dissatisfied with life: Respondents were categorised as being dissatisfied with life if they reported being quite dissatisfied or very dissatisfied on a 5-point scale with the alternatives very satisfied, quite satisfied, neither satisfied nor dissatisfied, quite dissatisfied, and very dissatisfied.

  • Mental distress: Based on the five-item version of the Hopkins Symptom Checklist (HSCL-5), with four response options ranging from not at all (1 point) to extremely (4 points). We classified respondents with a mean item score greater than 2 as having high levels of mental distress [47].

  • Chronic health problems: Respondents who reported having a chronic health problem or disability that has lasted at least 6 months (including seasonal and intermittent problems).

  • Alcohol consumption: Based on the consumption questions of the Alcohol Use Disorders Identification Test (AUDIT) [48]. Respondents were categorised into those who did and did not drink alcohol more than once a week, and those who did and did not consume six or more standard drinks (10 g ethanol) in a single session more than once a month (monthly binge drinkers).

  • Daily smoking: Respondents who reported that they smoked tobacco on a daily basis.

  • Physical activity: Based on the International Physical Activity Questionnaire (IPAQ). Respondents were classified as being physically active if they performed moderate or vigorous physical activity daily or walked for at least 30 min every day.

  • Low social support: Respondents who reported that they experienced low social support using a previously described categorisation of the Oslo-3 Social Support Scale (OSS-3) [49].

  • Receiving disability pension: Respondents who reported that they currently receive a disability pension.
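The categorisation rules above can be sketched in code. The following is a minimal illustration in Python rather than the R used for the actual analyses; the function names and inputs are hypothetical, but the cut-offs (wave boundaries at 7 and 14 days, HSCL-5 mean item score > 2) follow the definitions given above.

```python
def assign_wave(days_to_response: int) -> int:
    """Assign a response wave from the number of days between the
    initial email/SMS invitation and questionnaire completion."""
    if days_to_response <= 7:        # before the first reminder
        return 1
    elif days_to_response <= 14:     # between the first and second reminders
        return 2
    else:                            # after the second reminder
        return 3


def mental_distress(hscl5_items: list) -> bool:
    """HSCL-5: five items scored 1 (not at all) to 4 (extremely).
    A mean item score greater than 2 indicates high mental distress."""
    return sum(hscl5_items) / len(hscl5_items) > 2


# Example: a respondent answering on day 10 with moderate symptom scores
wave = assign_wave(10)                          # wave 2
distressed = mental_distress([3, 2, 2, 3, 2])   # mean 2.4, above threshold
```

The other dichotomous outcomes (daily smoking, monthly binge drinking, and so on) follow the same pattern of collapsing an ordinal response into a binary indicator.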

Statistical analyses

First, to identify how questionnaire response varied by age and gender, we calculated response rates separately for each response wave by age category and gender. We then assessed age and gender differences between respondents and non-respondents by comparing the age category and gender distribution of the invited sample with the distributions within each response wave and the distribution among non-respondents (made possible by information from the national population register). To assess educational differences between waves, we were limited to comparing the proportion of respondents reporting each level of education within each response wave (as information on non-respondents’ educational level was not available). Finally, to assess for a potential continuum of resistance in our data, we assessed associations between respondents’ wave (independent variable) and general health, life satisfaction, mental distress, chronic health problems, alcohol consumption, smoking, physical activity, social support, and receiving a disability pension (dependent variables). These were performed both as univariate analyses and as multivariable analyses which included age and sex as covariates.

We performed binomial logistic regression in analyses involving dichotomous dependent variables, and multinomial regression when dependent variables had more than two outcome categories. Weighting was used to correct for oversampling of small municipalities. All analyses were conducted in R [50]. We used the packages nnet to fit multinomial log-linear models [51], margins and effects to generate marginal effects [52,53,54], and ggplot2 to produce figures [55]. A 95% confidence interval (CI) was calculated for all estimates, and we used a significance threshold of 0.05.
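For a dichotomous outcome, a univariate pairwise comparison of two waves reduces to the odds ratio of a 2 × 2 table, which is what the binomial logistic regression estimates. The sketch below shows this with a Wald 95% CI, using only the Python standard library; the cell counts are purely illustrative (not the survey's data), and the actual analyses were weighted and run in R.

```python
import math


def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio comparing two groups, with a Wald 95% CI.

    a/b = outcome present/absent in the first wave,
    c/d = outcome present/absent in the second wave.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper


# Illustrative counts only: outcome present/absent in wave 1 (a, b)
# versus wave 3 (c, d)
or_, lower, upper = odds_ratio_ci(150, 850, 120, 880)
```

An odds ratio whose CI excludes 1 corresponds to a significant pairwise difference at the 0.05 threshold used here; the multinomial models extend the same idea to outcomes with more than two categories.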

Results

The overall response rate to the questionnaire was 41.5%, with 44% of responses received in wave 1, 31% in wave 2, and 25% in wave 3 (Fig. 1). The response rate was substantially higher among females (46%) than among males (37%), and it was higher in older age groups (Fig. 2).

Fig. 1

Number of questionnaire responses on each day of the survey and in each response wave: Wave 1 (0–7 days from the initial questionnaire distribution), Wave 2 (8–14 days) and Wave 3 (15–45 days). Reminders to complete the questionnaire were distributed on days 8 and 15. Had the survey not used a second reminder, it is likely that many wave 3 respondents would have remained non-respondents

Fig. 2

Questionnaire response rate by age group and gender. The overall response rate is shown to the right of each bar, and the response rate for each wave is shown within each segment of the bars

The age distribution of the invited sample, each response wave and non-respondents is shown in Fig. 3. Younger people were under-represented and older people were over-represented among respondents, particularly in waves 1 and 2. Similarly, males were under-represented among respondents, particularly in waves 1 and 2 (Fig. 4).

Fig. 3

Age group proportions within the invited sample, each response wave, and non-respondents

Fig. 4

Proportion of males and females within the invited sample, each response wave, and non-respondents

There were small differences in the distribution of respondents’ educational level between each wave, with wave 1 respondents having a relatively higher level than wave 2 and 3 respondents (Fig. 5).

Fig. 5

Distribution of educational level in each response wave

Health outcomes

Table 1 and Fig. 6 show the proportion of respondents in each wave that reported poor general health, life dissatisfaction, mental distress, chronic health problems, drinking alcohol more than once per week, monthly binge drinking, daily smoking, physical activity, low social support, and receiving a disability pension. Table 2 shows the results of pairwise comparisons of each wave using logistic regression analyses. Wave 2 respondents had, in comparison to wave 1, a lower prevalence of poor general health, life dissatisfaction, mental distress, and weekly alcohol consumption. Wave 3 respondents had, in comparison to wave 1, a higher prevalence of mental distress and daily smoking, and a lower prevalence of chronic health problems and weekly alcohol consumption. Additionally, wave 3 respondents had a higher prevalence of mental distress than wave 2 respondents did. All differences between groups were small, with the maximum absolute prevalence difference between the highest and lowest waves being 2.6 percentage points (weekly alcohol consumption, Table 1).
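The maximal difference quoted above is simply the gap between the highest and lowest wave prevalences, expressed in percentage points. For weekly alcohol consumption (using the two values reported in the Abstract):

```python
# Reported prevalence of weekly alcohol consumption, in percent
wave1, wave3 = 21.3, 18.7

# Absolute difference in percentage points, rounded to one decimal
max_abs_diff = round(abs(wave1 - wave3), 1)  # 2.6 percentage points
```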

Table 1 Prevalence (%, [95% CI]) of health and health-related outcomes among respondents in waves 1, 2 and 3
Fig. 6

Prevalence of various health and health-related outcomes among questionnaire respondents in waves 1, 2 and 3. Red dots indicate significant differences from wave 1 (p < 0.05) and the black dot indicates a significant difference between wave 2 and wave 3 (p < 0.05)

Table 2 Results of logistic regression models (odds ratios [95% confidence interval])

Our findings were similar when age and sex were included as covariates in these analyses (see supplementary material).

Discussion

Survey researchers have retained interest in the continuum of resistance model for 80 years, despite conflicting evidence of its validity. Today, with response rates generally declining [2, 31], finding effective ways to assess nonresponse bias is as important as ever.

We applied the continuum of resistance model to a large online public health survey, comparing respondents who completed the questionnaire within the first 7 days (wave 1), those who completed it after 8 to 14 days and one reminder (wave 2) and those who completed it after 15 or more days and two reminders (wave 3). For demographic variables, we identified differences between waves that were consistent with previous literature [1,2,3,4,5,6,7,8,9,10, 12,13,14,15]. However, any differences in health outcomes and behaviours were small between waves and were unlikely to be useful in identifying nonresponse bias. Overall, females and older people were more likely to respond to the questionnaire than males and younger people were. This was most pronounced among wave 1 and 2 respondents, whereas wave 3 more closely resembled the invited sample, containing a higher proportion of males and younger people. However, it is important to note that for sex and age, the composition of wave 3 more closely resembled waves 1 and 2 than it did the non-respondents.

For education, our findings were similar. There was a slight trend towards wave 1 respondents being more highly educated than those in waves 2 and 3. However, the difference between respondents and non-respondents is likely to be much larger than the small differences between response waves. Although we lacked direct information on the education level of non-respondents, data from Statistics Norway show that 35% of Hordaland county residents have tertiary education, and that 24% have only completed junior high school [56]. These proportions differ markedly from our results (52 and 13%, respectively), suggesting that non-respondents had far lower levels of education than respondents did.

Based on the continuum of resistance model, we expected that late respondents would display an overall pattern of poorer health across health outcomes. This has been found in a number of recent studies. For example, compared to early respondents, late respondents have been found to have a 21 to 68% higher prevalence of monthly binge drinking [3, 40, 42, 43], a 30% higher prevalence of current smokers [57], and a 50% higher prevalence of people who complete less than 30 min per day of physical activity [40]. We aligned our outcome definitions to facilitate comparisons with these studies, but did not find the same results. There was no difference between waves in the prevalence of monthly binge drinking or physical inactivity, and for current smoking, the difference in prevalence between waves 1 and 3 was only 1.4 percentage points. Our findings were similar for other health outcomes; in some cases, there were statistically significant but very small prevalence differences between waves, and in others there were none.

Our findings are supported by a recent comparison of early and late respondents to a national online health survey in the Netherlands [44]. In that study, only small differences in health-related outcomes were identified between response waves, despite substantial differences in socio-demographic variables between waves. Further, when analyses were adjusted for sociodemographic variables, the differences in health-related outcomes all but disappeared. In our results, there was little change when age and sex were adjusted for.

There are several potential explanations for why we did not find evidence to support a continuum of resistance in our data. One possibility is that the health status of respondents and non-respondents is genuinely very similar in our population. We believe this is unlikely, particularly considering the findings of Knudsen et al., who, in 2008, reported a substantially higher prevalence of mental and somatic health disorders among non-respondents to a health survey conducted in Hordaland county [9]. Our definition of late respondents differs from some recent studies demonstrating a continuum of resistance, which have used more reminders [43], longer follow-up periods [3, 12, 40,41,42], and/or alternative methods such as telephone calls to contact slow respondents [3, 41, 42]. To our knowledge, this is the only continuum of resistance study besides those of Kypri et al. [40, 41] and Klingwort [44] to collect data using purely digital means. It is possible that the barriers to questionnaire completion differ between postal, telephone and internet/smartphone surveys, and that the data collection method influences any eventual continuum of resistance.

This study has several limitations that we were unable to account for, and that may have affected our findings. First, we had no information about the health status of non-respondents, but rather we assumed that there were differences based on previous research. Future studies linking survey data with other sources, such as national registers, are necessary to gain more information on the health status of non-respondents. Additionally, to be eligible for inclusion in the survey, people had to have their digital contact information registered with the Norwegian authorities. This introduces a selection bias that is particularly pronounced among older people [45]. It is therefore likely that the health status of the survey sample is more homogeneous than it is in the general population.

Conclusion

We found that keeping the survey open for an extended period and using multiple reminders increased the overall proportion of male, younger and less-educated respondents. However, we were unable to identify meaningful differences in reported health and health determinants between early and late survey respondents. Assuming there are true differences in the health status of respondents and non-respondents, the results of delayed respondents provided little help in estimating the direction or magnitude of non-response bias.

Availability of data and materials

Public access to the database is closed. However, data may be available to researchers with study questions that fall within the general aims of Norwegian Counties Public Health Survey. Applicants and projects must fulfill requirements in Norwegian regulations and laws concerning research and protection of personal information (GDPR). Enquiries can be sent to fylkeshelseundersokelser@fhi.no.

Abbreviations

SMS: Short Message Service

HSCL-5: Hopkins Symptom Checklist - 5

AUDIT: Alcohol Use Disorders Identification Test

IPAQ: International Physical Activity Questionnaire

OSS-3: Oslo-3 Social Support Scale

References

  1. Tolonen H, Dobson A, Kulathinal S. Effect on trend estimates of the difference between survey respondents and non-respondents: results from 27 populations in the WHO MONICA project. Eur J Epidemiol. 2005;20(11):887–98. https://doi.org/10.1007/s10654-005-2672-5.

  2. Tolonen H, Helakorpi S, Talala K, Helasoja V, Martelin T, Prättälä R. 25-year trends and socio-demographic differences in response rates: Finnish adult health behaviour survey. Eur J Epidemiol. 2006;21(6):409–15. https://doi.org/10.1007/s10654-006-9019-8.

  3. Maclennan B, Kypri K, Langley J, Room R. Non-response bias in a community survey of drinking, alcohol-related experiences and public opinion on alcohol policy. Drug Alcohol Depend. 2012;126(1–2):189–94. https://doi.org/10.1016/j.drugalcdep.2012.05.014.

  4. Abrahamsen R, Svendsen MV, Henneberger PK, Gundersen GF, Torén K, Kongerud J, et al. Non-response in a cross-sectional study of respiratory health in Norway. BMJ Open. 2016;6(1):e009912. https://doi.org/10.1136/bmjopen-2015-009912.

  5. Lundberg I, Damström Thakker K, Hällström T, Forsell Y. Determinants of non-participation, and the effects of non-participation on potential cause-effect relationships, in the PART study on mental disorders. Soc Psychiatry Psychiatr Epidemiol. 2005;40(6):475–83. https://doi.org/10.1007/s00127-005-0911-4.

  6. Søgaard AJ, Selmer R, Bjertness E, Thelle D. The Oslo Health Study: the impact of self-selection in a large, population-based survey. Int J Equity Health. 2004;3(1).

  7. Van Den Akker M. Morbidity in responders and non-responders in a register-based population survey. Fam Pract. 1998;15(3):261–3. https://doi.org/10.1093/fampra/15.3.261.

  8. Nummela O, Sulander T, Helakorpi S, Haapola I, Uutela A, Heinonen H, et al. Register-based data indicated nonparticipation bias in a health study among aging people. J Clin Epidemiol. 2011;64(12):1418–25. https://doi.org/10.1016/j.jclinepi.2011.04.003.

  9. Knudsen AK, Hotopf M, Skogen JC, Overland S, Mykletun A. The health status of nonparticipants in a population-based health study: the Hordaland health study. Am J Epidemiol. 2010;172(11):1306–14. https://doi.org/10.1093/aje/kwq257.

  10. Cheung KL, Ten Klooster PM, Smit C, De Vries H, Pieterse ME. The impact of non-response bias due to sampling in public health studies: a comparison of voluntary versus mandatory recruitment in a Dutch national survey on adolescent health. BMC Public Health. 2017;17(1).

  11. Bergstrand R, Vedin A, Wilhelmsson C, Wilhelmsen L. Bias due to non-participation and heterogenous sub-groups in population surveys. J Chronic Dis. 1983;36(10):725–8. https://doi.org/10.1016/0021-9681(83)90166-2.

  12. Korkeila K, Suominen S, Ahvenainen J, Ojanlatva A, Rautava P, Helenius H, et al. Non-response and related factors in a nation-wide health survey. Eur J Epidemiol. 2001;17(11):991–9. https://doi.org/10.1023/A:1020016922473.

  13. Zhao J, Stockwell T, Macdonald S. Non-response bias in alcohol and drug population surveys. Drug Alcohol Review. 2009;28(6):648–57. https://doi.org/10.1111/j.1465-3362.2009.00077.x.

  14. Enzenbach C, Wicklein B, Wirkner K, Loeffler M. Evaluating selection bias in a population-based cohort study with low baseline participation: the LIFE-adult-study. BMC Med Res Methodol. 2019;19(1):135. https://doi.org/10.1186/s12874-019-0779-8.

  15. Rodes A, Sans S, Balaña LL, Paluzie G, Aguilera R, Balaguer-Vintro I. Recruitment methods and differences in early, late and non-respondents in the first MONICA-Catalonia population survey. Rev Epidemiol Sante Publique. 1990;38(5–6):447–53.

  16. Ohlson CG, Ydreborg B. Participants and non-participants of different categories in a health survey. A cross-sectional register study. Scand J Soc Med. 1985;13(2):67–74. https://doi.org/10.1177/140349488501300203.

  17. Criqui MH, Barrett-Connor E, Austin M. Differences between respondents and non-respondents in a population-based cardiovascular disease study. Am J Epidemiol. 1978;108(5):367–72. https://doi.org/10.1093/oxfordjournals.aje.a112633.

  18. Helakorpi S, Makela P, Holstila A, Uutela A, Vartiainen E. Can the accuracy of health behaviour surveys be improved by non-response follow-ups? Eur J Pub Health. 2015;25(3):487–90. https://doi.org/10.1093/eurpub/cku199.

  19. Rönmark E, Lundqvist A, Lundbäck B, Nyström L. Non-responders to a postal questionnaire on respiratory symptoms and diseases. Eur J Epidemiol. 1999;15(3):293–9. https://doi.org/10.1023/A:1007582518922.

  20. Tolonen H, Laatikainen T, Helakorpi S, Talala K, Martelin T, Prättälä R. Marital status, educational level and household income explain part of the excess mortality of survey non-respondents. Eur J Epidemiol. 2010;25(2):69–76. https://doi.org/10.1007/s10654-009-9389-9.

  21. Wild TC, Cunningham J, Adlaf E. Nonresponse in a follow-up to a representative telephone survey of adult drinkers. J Stud Alcohol. 2001;62(2):257–61. https://doi.org/10.15288/jsa.2001.62.257.

  22. Lahaut VMHCJ, Jansen HAM, van de Mheen D, Garretsen HFL. Non-response bias in a sample survey on alcohol consumption. Alcohol Alcohol. 2002;37(3):256–60. https://doi.org/10.1093/alcalc/37.3.256.

  23. Hoeymans N, Feskens EJM, Van Den Bos GAM, Kromhout D. Non-response bias in a study of cardiovascular diseases, functional status and self-rated health among elderly men. Age Ageing. 1998;27(1):35–40. https://doi.org/10.1093/ageing/27.1.35.

  24. Christensen AI, Ekholm O, Gray L, Glumer C, Juel K. What is wrong with non-respondents? Alcohol-, drug- and smoking-related mortality and morbidity in a 12-year follow-up study of respondents and non-respondents in the Danish Health and Morbidity Survey. Addiction. 2015;110(9):1505–12.

  25. Jousilahti P, Salomaa V, Kuulasmaa K, Niemelä M, Vartiainen E. Total and cause specific mortality among participants and non-participants of population based health surveys: a comprehensive follow up of 54 372 Finnish men and women. J Epidemiol Community Health. 2005;59(4):310–5. https://doi.org/10.1136/jech.2004.024349.

  26. Suominen S, Koskenvuo K, Sillanmäki L, Vahtera J, Korkeila K, Kivimäki M, et al. Non-response in a nationwide follow-up postal survey in Finland: a register-based mortality analysis of respondents and non-respondents of the health and social support (HeSSup) study. BMJ Open. 2012;2(2):e000657. https://doi.org/10.1136/bmjopen-2011-000657.

  27. Cohen G, Duffy J. Are nonrespondents to health surveys less healthy than respondents? J Off Stat. 2002;18(1):13–23.

  28. Keyes KM, Rutherford C, Popham F, Martins SS, Gray L. How healthy are survey respondents compared with the general population?: using survey-linked death records to compare mortality outcomes. Epidemiology. 2018;29(2):299–307. https://doi.org/10.1097/EDE.0000000000000775.

  29. Barchielli A. Nine-year follow-up of a survey on smoking habits in Florence (Italy): higher mortality among non-responders. Int J Epidemiol. 2002;31(5):1038–42. https://doi.org/10.1093/ije/31.5.1038.

  30. Jones J. The effects of non-response on statistical inference. J Health Soc Policy. 1996;8(1):49–62. https://doi.org/10.1300/J045v08n01_05.

  31. Galea S, Tracy M. Participation rates in epidemiologic studies. Ann Epidemiol. 2007;17(9):643–53. https://doi.org/10.1016/j.annepidem.2007.03.013.

  32. Groves RM. Nonresponse rates and nonresponse bias in household surveys. Public Opin Q. 2006;70(5):646–75. https://doi.org/10.1093/poq/nfl033.

  33. Pace CR. Factors influencing questionnaire returns from former university students. J Appl Psychol. 1939;23(3):388–97. https://doi.org/10.1037/h0063286.

  34. Lin I-F, Schaffer NC. Using survey participants to estimate the impact of nonparticipation. Public Opin Q. 1995;59:239–58.

  35. Donald MN. Implications of nonresponse for the interpretation of mail questionnaire data. Public Opin Q. 1960;24(1):99. https://doi.org/10.1086/266934.

  36. Ferber R. The problem of bias in mail returns: a solution. Public Opin Q. 1948;12(4):669. https://doi.org/10.1086/266009.

  37. Wellman JD, Hawk EG, Roggenbuck JW, Buhyoff GJ. Mailed questionnaire surveys and the reluctant respondent: an empirical examination of differences between early and late respondents. J Leis Res. 1980;12(2):164–73. https://doi.org/10.1080/00222216.1980.11969435.

  38. Siemiatycki J, Campbell S. Nonresponse bias and early versus all responders in mail and telephone surveys. Am J Epidemiol. 1984;120(2):291–301. https://doi.org/10.1093/oxfordjournals.aje.a113892.

  39. Haring R, Alte D, Völzke H, Sauer S, Wallaschofski H, John U, et al. Extended recruitment efforts minimize attrition but not necessarily bias. J Clin Epidemiol. 2009;62(3):252–60. https://doi.org/10.1016/j.jclinepi.2008.06.010.

  40. Kypri K, Samaranayaka A, Connor J, Langley JD, Maclennan B. Non-response bias in a web-based health behaviour survey of New Zealand tertiary students. Prev Med. 2011;53(4–5):274–7. https://doi.org/10.1016/j.ypmed.2011.07.017.

  41. Kypri K, Stephenson S, Langley J. Assessment of nonresponse bias in an internet survey of alcohol use. Alcohol Clin Exp Res. 2004;28(4):630–4. https://doi.org/10.1097/01.ALC.0000121654.99277.26.

  42. Meiklejohn J, Connor J, Kypri K. The effect of low survey response rates on estimates of alcohol consumption in a general population survey. PLoS One. 2012;7(4):e35527. https://doi.org/10.1371/journal.pone.0035527.

    CAS  Article  PubMed  PubMed Central  Google Scholar 

  43. 43.

    Boniface S, Scholes S, Shelton N, Connor J. Assessment of non-response Bias in estimates of alcohol consumption: applying the continuum of resistance model in a general population survey in England. PLoS One. 2017;12(1):e0170892. https://doi.org/10.1371/journal.pone.0170892.

    CAS  Article  PubMed  PubMed Central  Google Scholar 

  44. 44.

    Klingwort J, Buelens B, Schnell R. Early versus late respondents in web surveys: evidence from a national health survey. Stat J IAOS. 2018;34(3):461–71. https://doi.org/10.3233/SJI-170421.

    Article  Google Scholar 

  45. 45.

    Norwegian Digitalisation Agency. The common contact register, usage statistics [Available from: https://www.difi.no/rapporter-og-statistikk/nokkeltall-og-statistikk/digitalisering/kontakt-og-reservasjonsregisteret.

  46. 46.

    University of Oslo. Service for Sensitive Data [Available from: https://www.uio.no/english/services/it/research/sensitive-data/about/index.html.

  47. 47.

    Strand BH, Dalgard OS, Tambs K, Rognerud M. Measuring the mental health status of the Norwegian population: a comparison of the instruments SCL-25, SCL-10, SCL-5 and MHI-5 (SF-36). Nord J Psychiatry. 2003;57(2):113–8. https://doi.org/10.1080/08039480310000932.

    Article  PubMed  Google Scholar 

  48. 48.

    Saunders JB, Aasland OG, Babor TF, De La Fuente JR, Grant M. Development of the Alcohol Use Disorders Identification Test (AUDIT): WHO Collaborative Project on Early Detection of Persons with Harmful Alcohol Consumption-II. Addiction. 1993;88(6):791–804.

    CAS  Article  Google Scholar 

  49. 49.

    Bøen H, Dalgard OS, Bjertness E. The importance of social support in the associations between psychological distress and somatic health problems and socio-economic factors among older adults living at home: a cross sectional study. BMC Geriatr. 2012;12(1):27. https://doi.org/10.1186/1471-2318-12-27.

    Article  PubMed  PubMed Central  Google Scholar 

  50. 50.

    R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2017.

    Google Scholar 

  51. 51.

    Venables WN, Ripley BD. Modern applied statistics with S. 4th ed. New York: Springer; 2002. https://doi.org/10.1007/978-0-387-21706-2.

    Book  Google Scholar 

  52. 52.

    Leeper TJ. margins: Marginal Effects for Model Objects. R package version 0.3.23; 2018.

    Google Scholar 

  53. 53.

    Fox J. Effect displays in R for generalised linear models. J Stat Softw. 2003;8(15):1–27.

    Article  Google Scholar 

  54. 54.

    Fox J, Weisberg S. An R Companion to Applied Regression. Thousand Oaks2019. Available from: http://tinyurl.com/carbook.

  55. 55.

    Wickham H. ggplot2: elegant graphics for data analysis. New York: Springer-Verlag; 2016. https://doi.org/10.1007/978-3-319-24277-4.

    Book  Google Scholar 

  56. 56.

    Statistics Norway. Educational attainment of the population [Internet] 2019 [Available from: https://www.ssb.no/en/utdanning/statistikker/utniv.

    Google Scholar 

  57. 57.

    Verlato G, Melotti R, Olivieri M, Corsico A, Bugiani M, Accordini S, et al. Asthmatics and ex-smokers respond early, heavy smokers respond late to mailed surveys in Italy. Respir Med. 2010;104(2):172–9. https://doi.org/10.1016/j.rmed.2009.09.022.

    Article  PubMed  Google Scholar 

Download references

Acknowledgements

The authors wish to thank the Hordaland County Council, who were instrumental in funding, planning and implementing the 2018 Norwegian Counties Public Health Survey in Hordaland.

Funding

All authors completed this work within their normal employment at the Norwegian Institute of Public Health. No additional sources of funding were received.

Author information

Contributions

All authors were involved in planning the study. Data were collected by TSN, LEÅ and JCS. BC analysed the data and wrote the draft manuscript. All authors contributed to and approved the final manuscript.

Corresponding author

Correspondence to Benjamin Clarsen.

Ethics declarations

Ethics approval and consent to participate

The survey was approved by the Norwegian Data Protection Authority. This study is a secondary analysis of previously collected data, and according to the Norwegian Health Research Act, additional ethical approval was not required. The Norwegian Counties Public Health Survey core research group granted administrative permission to the authors to access the data.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Clarsen, B., Skogen, J.C., Nilsen, T.S. et al. Revisiting the continuum of resistance model in the digital age: a comparison of early and delayed respondents to the Norwegian counties public health survey. BMC Public Health 21, 730 (2021). https://doi.org/10.1186/s12889-021-10764-2

Keywords

  • Epidemiologic methods
  • Selection bias
  • Surveys and questionnaires