
The development of a questionnaire to assess leisure time screen-based media use and its proximal correlates in children (SCREENS-Q)

Abstract

Background

The screen-media landscape has changed drastically during the last decade: wide-scale ownership and use of new portable touchscreen-based devices has plausibly changed both the volume of screen media use and the way children and young people entertain themselves and communicate with friends and family members. This rapid development is not sufficiently mirrored in the available tools for measuring children’s screen media use. The aim of this study was to develop and evaluate a parent-reported standardized questionnaire to assess 6–10-year-old children’s use of multiple screen media, their screen media habits, their screen media environment, and its plausible proximal correlates based on a suggested socio-ecological model.

Methods

The SCREENS questionnaire was developed through an iterative process. Informed by the literature, media experts and end-users, a conceptual framework was constructed to guide the development of the questionnaire. Parents and media experts evaluated face and content validity. Pilot and field testing in the target group was conducted to assess test-retest reliability using kappa statistics and intraclass correlation coefficients (ICC). Construct validity of relevant items was assessed using pairwise non-parametric (Spearman’s) correlations. The SCREENS questionnaire is based on a multidimensional and formative model.

Results

The SCREENS questionnaire covers six domains validated to be important factors of screen media use in children and comprises 19 questions and 92 items. Test-retest reliability (n = 37 parents) for continuous variables was moderate to substantial, with ICCs ranging from 0.67 to 0.90. For relevant nominal and ordinal data, kappa values were all above 0.50, with more than 80% of the values above 0.61, indicating good test-retest reliability. Internal consistency between two different time use variables (from n = 243) showed good correlations, with rho ranging from 0.59 to 0.66. Response time was within 15 min for all participants.

Conclusions

SCREENS-Q is a comprehensive tool to assess children’s screen media habits, the screen media environment and possible related correlates. It is a feasible questionnaire with multiple validated constructs and moderate to substantial test-retest reliability for all evaluated items. The SCREENS-Q is a promising tool to investigate children’s screen media use.


Background

The screen-media landscape has changed drastically during the last decade in many families with children. While the television (TV) and gaming consoles have been in the households of the majority of families for decades, the wide-scale ownership and use of new portable touchscreen-based devices such as smartphones and tablets, and the applications available for these devices, may plausibly have changed the volume of screen use and the way young people entertain themselves and communicate with friends and family members. The screen media environment in a typical household now often includes multiple devices: TV, gaming console, smartphone, tablet, and personal computer, each with multiple applications offering passive or interactive capabilities [1]. Thus, today’s screen media are more multifaceted and complex in nature than just a few years ago.

In recent years, researchers have become increasingly interested in investigating what determines screen media use (SMU) and the possible long-term consequences of excessive SMU. Different instruments and questionnaires have been used to assess screen time, but we are unaware of questionnaires that assess the broad screen media environment and also include use of specific media content [2], family screen media rules and other screen media habits. Most questionnaires have investigated either screen time [3,4,5] or media content [6,7,8], and the majority of studies have addressed only TV time and computer use, omitting screen use on other devices such as smartphones and tablets [2]. Furthermore, the target group in some of these studies has been infants or children too young to control media use themselves, so that their exposure to screen media was measured through their parents’ media use [2,3,4, 9]. Studies addressing children’s screen time or content suggest that content might have a greater influence on health outcomes in youth than the actual amount of screen time [2, 10]. Few studies have reported test-retest reliability and validity results for screen time questionnaire instruments [11,12,13,14], evaluations have been limited to TV and computer time, and no studies have examined the metric properties of items that attempt to capture children’s screen media use of today. One screen time questionnaire was developed with a primary focus on video and computer gaming and showed good test-retest reliability among college students [15].

A few larger studies have used instruments that include time spent on smartphones and tablets [16,17,18]: one among children aged 6 months to 4 years [17], Nathanson [18] among 3-to-5-year-olds and their sleep behaviors, and the most recent among 15-year-old children [16]. Similarly, a few studies have included items assessing time spent on smartphones and tablets among toddlers or preschool children [3, 17, 18] or adolescents [16] in order to quantify time spent on screen devices. However, to our knowledge, none of these instruments is reported to have been systematically developed or thoroughly validated. Furthermore, no validated questionnaire has addressed SMU in a broader context including time spent on different devices and platforms, the context of use, the home environment, and screen media behavior of children.

To further progress the research area of SMU and its relation to the health of children and young people, a new comprehensive instrument is needed to assess children’s screen habits in addition to the broad screen media family environment that children grow up in. A new instrument will help improve efforts to conduct more rigorous, high-quality observational studies. The aim of this study was to develop a parent-reported standardized questionnaire to assess 6–10-year-old children’s leisure time SMU and habits that also captures the screen media environment surrounding the child. The characterization of the screen media environment also includes putative screen-media-specific correlates. Accurately measuring such proximal correlates in observational studies may assist in identifying possible targets for interventions that aim to change children’s screen media use. In this paper we describe the development of the SCREENS questionnaire (SCREENS-Q), including an examination of its reliability, the internal consistency of independent sub-scales, and qualitative and quantitative item analyses of the final questionnaire in an independent sample of parents of children representing the target population.

Methods

Initially, a steering group was established to conduct and oversee the process of developing the tool for measuring children’s SMU. Further, parents representing the target group and a panel of Danish media experts were recruited as key informants and for the initial validation of the questionnaire.

Scientific steering group

A steering group was formed to initiate and guide the development of a questionnaire to assess children’s screen media environment and its plausible proximal correlates. Members of the steering group are the authors of this paper and are all part of the academic staff at the Center for Research in Childhood Health and the Research Unit for Exercise Epidemiology at the University of Southern Denmark. The steering group was responsible for initial item generation, the selection of parents and media experts, design and distribution of the questionnaire, analyses of responses from parents and experts, and drafting the final questionnaire.

Key informants – parents

A convenience sample of 10 parents of 6–8-year-old children (this age range was chosen for convenience and to address the youngest part of the target group) was recruited for the face and content validation interviews on the first draft of the questionnaire. Inclusion criteria were being a parent of a child between 6 and 8 years of age (pre-school or first grade) with regular use of screen media; we sought parents of both boys and girls. An email invitation letter containing written participant information was sent to parents from public schools in the local area of the city of Odense, Denmark.

Key informants – screen media experts

Ten Danish media experts were recruited to evaluate a draft of the SCREENS-Q. Criteria for being an “expert in the field of SMU” were: having published peer-reviewed scientific papers on the topic, or having authored books on media use, or being involved with descriptive national reporting of media use in Denmark. We deliberately recruited experts of both genders, including experts who were publicly advocates of, opposed to, or neutral towards children’s heavy use of screen media. The final panel of Danish media experts represented the areas of psychology, media, communication, journalism and medicine (see the list of media experts in the acknowledgements).

Steps of development

The SCREENS-Q was developed through an iterative process with several intertwining steps (see Fig. 1). Based on established methodological literature [19,20,21], the process comprised the following steps: 1. Definition and elaboration of the construct to be measured; 2. Selection and formulation of items; 3. Pilot testing for face and content validity (parents and screen media experts); 4. Field testing for test-retest reliability in a sample of parents of children 7 years of age; and 5. A further field test in a larger sample of the target group from the Odense Child Cohort (OCC) for assessment of construct validity and item analysis for final scale evaluation.

Fig. 1 Illustration of the iterative process of the development and validation of the SCREENS-Q

Steps 1, 2 and 3 were primarily qualitative evaluations. They were conducted as an iterative process in close collaboration with parents of 6–8-year-old children, the scientific steering group, and Danish screen media experts. Steps 4 and 5 were primarily quantitative evaluations of test-retest reliability, item correlations and response distributions.

Defining the construct and initial generation of items (steps 1 and 2)

With the SCREENS-Q we aimed to measure children’s SMU (time and content), specific screen media behavior, and the screen media home environment, including important putative proximal correlates of children’s SMU. Several methods were used to identify relevant factors of these constructs. For the proximal correlates, the scientific steering group initially established a theoretical model based on a socio-ecological model to provide a foundation for defining and comprehensively understanding how the various factors that may determine children’s media use are interrelated (see Fig. 2). The socio-ecological model served as a unifying framework for identifying and investigating potential correlates of children’s SMU. Subsequently, a literature search identified constructs that supplemented the socio-ecological model [22, 23]. Based on this model we also included relevant questionnaire items from former or ongoing studies [13, 16, 17, 24,25,26,27].

Fig. 2 Socio-ecological model illustrating potential correlates of children’s screen media use

With the SCREENS-Q we aimed to assess possible direct and indirect causal factors that may influence children’s SMU. The questionnaire is multidimensional and based on a formative model [21, 28], meaning that it is intended to cover and measure all indicators that might possibly contribute to the construct “children’s SMU”. Potential causal factors may have different impacts, but in a formative perspective we aimed to identify factors with little impact as well. Therefore, in the initial phase we attempted a comprehensive analysis of the construct [20, 21, 28] to generate a broad list of domains and items that were not necessarily correlated. Reduction of redundant items was carried out in later steps during pilot and field testing [20, 21, 28].

The number of questions and items within each domain is not necessarily an expression of the importance or weighting of the specific domain, but rather a question of meaningful wording and/or how accurately we wanted to measure the specific domain. This first version of the SCREENS-Q was developed for use in a large ongoing birth cohort, the Odense Child Cohort (OCC). Therefore, relevant demographic, social and health behavior questions were obtained from measures and questionnaires in the OCC (i.e. family structure, religious and ethnic origin, TV in the bedroom, other health-related variables, attendance of institutions, socioeconomic status).

Pilot testing: face and content validity (step 3)

A first draft of the SCREENS-Q was developed based on the socio-ecological model, and face and content validity were tested in a convenience sample of key informants. Ten parents of children aged 6–8 years filled out the questionnaire while researchers were present to answer any questions about understanding and interpretation of the wording. Immediately after completion, a semi-structured interview was conducted on relevance and importance, and on whether any important domains or areas of children’s SMU were missing. The key informant interviews were recorded and transcribed. Every item in the questionnaire was analyzed and revised based on the interviews, with respect to wording, understanding, interpretation, relevance and coverage of SMU in the sample of parents. Relevant changes were adopted after careful consideration in the steering group.

Fifteen Danish media experts unaffiliated with the study were contacted by telephone, informed about the project and asked if they were willing to evaluate the SCREENS-Q questionnaire. Ten of the 15 experts agreed to participate. An updated draft was sent to the ten media experts for another evaluation of face and content validity. The experts received an email with a brief description of the aim of our project, the purpose of the questionnaire, and a link to the online questionnaire in SurveyXact. They were asked not to fill it out, but to comment on every item and/or domain in the questionnaire. They were also asked to comment on wording, understanding and relevance for each item. Finally, they were asked whether the domains in the questionnaire adequately covered all significant areas of children’s use of screen media. Based on the responses and subsequent discussion in the steering group the questionnaire was further refined, and some items were modified, deleted, or added to the questionnaire.

The experiences from these first steps were discussed in the scientific steering group, and the final draft for field testing in the target group comprised a questionnaire covering six domains and 19 questions, summing to 96 items about children’s SMU and potential correlates (see Table 1 for the domains, questions and items included in the SCREENS-Q). Steps 1–3 were conducted as an iterative process from February to July 2017.

Table 1 Domains of screen-media use and proximal correlates included in the SCREENS-Q, example questions and response categories

Field testing in the target group (step 4 and 5)

Step 4: examination of test-retest reliability

Another convenience sample was recruited from schools (1st and 2nd grade) in the municipalities of Odense and Kerteminde, Denmark. Inclusion criteria were: 1) being a parent of a child aged 7–9 years, and 2) the child having access to, and using, at least two of the following screen media devices in the household: tablet, smartphone, TV, gaming console or computer. In total, 35 parents agreed to participate in this field testing for test-retest reliability. The questionnaire was sent to the parents, and responses were collected electronically. The participants were asked to fill out the SCREENS-Q twice, separated by a two-week interval. Step 4 was conducted in November and December 2017.

Step 5: construct validity and item analysis

After evaluating test-retest reliability in the convenience sample, the SCREENS-Q was implemented in the OCC, an ongoing closed birth cohort initiated in 2010–2012 [24]. Construct validity was evaluated using two items measuring screen time (items 9 and 13, from n = 243). Furthermore, item analysis (based on descriptive analysis of the data and qualitative evaluation of response patterns and feedback) and feasibility (willingness and time to fill out the SCREENS-Q) were evaluated in a subsample of parents from the cohort (n = 243). Items were deemed redundant if they showed too little variation. Item responses were analyzed to investigate whether any answer categories were unexpectedly unused or unanswered and therefore seemed redundant or irrelevant. Participating parents were asked to fill out the SCREENS-Q on a tablet while attending the 7-year examination at the hospital. If the child did not attend the planned 7-year examination, the questionnaire was sent to the parents by email. Step 5 was conducted on data collected from late November 2017 to early March 2018.

Data management

The questionnaire was distributed and answered online. In the pilot testing (step 3) SurveyXact was used for management and initial response analysis. For the field testing, a professional data entry organization (Open Patient data Explorative Network) entered the data in the survey/data management software REDCap. A series of range and logical checks were undertaken to clean the data.

Statistical methods

To determine test–retest reliability, selected relevant items were compared between the first and second administrations of the SCREENS-Q during field testing (n = 35). For categorical/binomial variables (questions 4, 5 and 11), levels of agreement were determined using kappa coefficients, defined as poor/slight (κ = 0.00–0.20), fair (κ = 0.21–0.40), moderate (κ = 0.41–0.60), substantial (κ = 0.61–0.80) and almost perfect (κ = 0.81–1.00) [29]. Reliability of questions on an ordinal scale (items 3, 6, 11 and 17) was assessed using weighted kappa and/or the intra-class correlation coefficient (ICC), as these estimates are identical when the kappa weights are quadratic [28]. To avoid excluding items with a low kappa value despite high percent agreement (a high proportion of responses in one category creates instability in the kappa statistic), items with κ > 0.60 and/or percent agreement ≥ 60% were considered to have acceptable reliability [30, 31].
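As a minimal illustration of the agreement statistics described above, the following Python sketch computes Cohen’s kappa and percent agreement and applies the decision rule used here (κ > 0.60 and/or agreement ≥ 60%). The actual analyses were run in Stata; this sketch and its response vectors are purely illustrative.

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa for two administrations of the same categorical item."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / n ** 2
    if expected == 1:  # all responses in one category on both occasions
        return 1.0 if observed == 1 else 0.0
    return (observed - expected) / (1 - expected)

def percent_agreement(r1, r2):
    """Observed agreement between the two administrations, in percent."""
    return 100 * sum(a == b for a, b in zip(r1, r2)) / len(r1)

def acceptable(r1, r2):
    # Decision rule from the paper: kappa > 0.60 and/or agreement >= 60%
    return cohen_kappa(r1, r2) > 0.60 or percent_agreement(r1, r2) >= 60
```

Note that when nearly all responses fall in one category, expected agreement is high and kappa becomes unstable (it can approach zero even at 90% observed agreement), which is why the percent-agreement fallback is part of the rule.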

Test-retest reliability of continuous variables (items 9, 13 and 19) was evaluated by calculating the ICC and the standard error of measurement. An ICC value of 0.75 or higher was considered to represent good reliability, values of 0.50–0.74 moderate reliability, and values below 0.50 poor reliability. Bland-Altman plots were created, and 95% limits of agreement calculated, to investigate agreement between the first and second administrations of the SCREENS-Q for continuous variables.
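The Bland-Altman quantities for a continuous item follow directly from the paired differences. The sketch below is illustrative Python with hypothetical minutes-per-day data (the ICC itself requires an ANOVA-based model, and the analyses here were run in Stata); the SEM estimator used, SD of differences divided by √2, is one common choice.

```python
import math
import statistics

def bland_altman(t1, t2):
    """Mean difference, 95% limits of agreement, and a standard error of
    measurement (SEM = SD_diff / sqrt(2)) for two administrations of a
    continuous item (e.g. reported daily screen time in minutes)."""
    diffs = [a - b for a, b in zip(t1, t2)]
    mean_diff = statistics.mean(diffs)
    sd_diff = statistics.stdev(diffs)  # sample SD of the paired differences
    loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    return mean_diff, loa, sd_diff / math.sqrt(2)

# Hypothetical screen-time reports (minutes/day) at the two administrations
t1 = [60, 90, 120, 150]
t2 = [70, 85, 130, 145]
md, loa, sem = bland_altman(t1, t2)
```

A systematic shift between administrations shows up as a mean difference far from zero; random reporting noise widens the limits of agreement.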

As the SCREENS-Q is a multidimensional tool based on a formative model, item analyses were primarily done by qualitative evaluation of distributions and usefulness. Factor analysis and estimation of internal consistency do not apply, as items are not assumed to be intercorrelated in a formative model [21]. This holds for all items in the questionnaire except questions 9 and 13, each of which can be summarized to provide a total screen time variable. Construct validity is about consistency, not accuracy [19, 28]. Thus, construct validity of these questions was assessed using pairwise non-parametric (Spearman’s) correlations, with 95% CIs calculated by bootstrap estimation with 1000 replications [32].
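The consistency check above pairs two screen-time measures and correlates them. A self-contained Python sketch of Spearman’s rho with a percentile-bootstrap 95% CI follows; the paper’s analyses used Stata’s bootstrap, and the function names and data here are illustrative only.

```python
import random
from statistics import mean

def _ranks(x):
    """Average ranks (ties share the mean of their rank positions)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    ranks = [0.0] * len(x)
    i = 0
    while i < len(x):
        j = i
        while j + 1 < len(x) and x[order[j + 1]] == x[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank-transformed data."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def bootstrap_ci(x, y, reps=1000, seed=1):
    """Percentile bootstrap 95% CI for Spearman's rho."""
    rng = random.Random(seed)
    n, stats = len(x), []
    for _ in range(reps):
        idx = [rng.randrange(n) for _ in range(n)]
        xs, ys = [x[i] for i in idx], [y[i] for i in idx]
        if len(set(xs)) < 2 or len(set(ys)) < 2:
            continue  # skip degenerate resamples with zero rank variance
        stats.append(spearman(xs, ys))
    stats.sort()
    return (stats[round(0.025 * (len(stats) - 1))],
            stats[round(0.975 * (len(stats) - 1))])
```

With two measures of the same construct, a rho in the reported 0.59–0.66 range would indicate good but imperfect consistency between the two question formats.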

All analyses were conducted in Stata/IC 15.

Results

The five iterative steps of development, field testing and evaluation of validity and reliability resulted in a final version of the SCREENS-Q comprising six domains, 19 questions and 92 items, representing factors from the individual, social (interpersonal) and physical (home) environment levels of our socio-ecological model (see Table 1, and Additional file 1 (Danish) and Additional file 2 (English) for the full version).

Validity

Our two groups of key informants, 1) ten parents (aged 30–49, 70% mothers) of ten children aged 6–8 years (6 boys and 4 girls) and 2) ten Danish media experts (6 females, 4 males) from very different educational backgrounds, including the scientific fields of psychology, media, communication, journalism and medicine, evaluated face and content validity. Wording, understanding, interpretation and coverage were confirmed by both groups. Parents suggested that one originally drafted item, about whether the child uses media on and off and thereby has many short bouts of SMU, was not an issue for children 6–8 years of age, so that question was omitted. They also suggested that SMU during school hours (for educational purposes) might influence leisure time SMU. Based on the key informant interviews, additional items were added (questions 6, 7, 8 and 8.1). The media experts’ biggest concern was that children’s SMU and behavior, or its determinants, could be difficult to capture in a questionnaire because of the complexity of screen media behavior. Some experts emphasized that we should also aim to capture the positive effects of SMU; thus, question 16 was expanded with several items addressing possible positive effects of SMU (items 16.3, 16.4, 16.5, 16.6, 16.7, 16.9, 16.10, 16.11). Other experts emphasized the importance of asking about relations and context (i.e. who the child is with during SMU, rules for SMU, etc.). Therefore, questions 11, 14 and 15 were refined and extended. The final version of the SCREENS-Q contains six domains validated to be important factors defining the construct of “children’s SMU and behavior”.
Five of the six domains address the child’s SMU, screen media preferences (device, content), screen media behavior (when, with whom, and on what device and platform) and the screen home environment, and comprise 16 questions and 77 items; one domain addresses the parents’ SMU in the home (two questions and 15 items) (see Additional file 1 for the full SCREENS-Q).

Construct validity was assessed by investigating internal consistency between two different questions asking about children’s SMU (time in hours and minutes) on a typical weekday and a weekend day (question 9 and question 13) in the second field test with n = 243 parents.

Spearman’s rho showed good correlation (rho ranging from 0.59 to 0.66) between the two different ways of measuring time spent on screen media (Table 2).

Table 2 Construct validity (question 9 measured against question 13 (n = 243))

Reliability

Test-retest reliability was investigated in the convenience sample of 35 parents. All 35 completed the first administration (Q1), and 31 of these responded to the second (Q2), giving n = 31 eligible for the test-retest reliability analysis. Of the 31 parents (25 mothers and 6 fathers, of 19 boys and 12 girls) who filled out the SCREENS-Q twice, 11 completed the second questionnaire within the expected 2–2½ weeks, 11 within 3–3½ weeks and 9 after 4–4½ weeks. Mean time to follow-up was 22.5 (SD 6.5) days.

For continuous variables (questions and items 9, 13, 18.1 and 19), ICCs and Bland-Altman plots were calculated; test-retest reliability was considered moderate to good, as ICCs for all examined items were between 0.67 and 0.90 (see Table 3, where standard errors of measurement, mean differences and limits of agreement are also presented). For all other items, kappa or weighted kappa and observed agreement were calculated where applicable and showed high reliability. Kappa values for items included in the present version of the questionnaire were all above 0.50, and 80% were above 0.61, indicating good test-retest reliability ranging from moderate to substantial. Less than 10% of the items returned low kappa values despite high observed agreement, owing to a high proportion of responses in one category. None of the observed agreement values were below 60%, and the majority (65%) showed an observed agreement ≥ 90% (see Table 4 for an overview).

Table 3 Test-retest reliability of child- and parent screen time use (N = 31)
Table 4 Summary of reliability assessment by domains

Item analysis

Data for the item analysis come from n = 243 parents (n = 60 fathers, n = 182 mothers, n = 1 relation to the child not stated) of n = 243 children (n = 142 boys, n = 101 girls) participating in the OCC. The largest group of parents (48%) had a first-stage tertiary education (short or long college, i.e. a bachelor’s degree), 26% had completed higher education at university (i.e. a master’s degree or above), and 26% had completed upper secondary school or a vocational education.

Response rate and completeness were high (98.4%), and quantitative and qualitative item analysis did not suggest further deletion or modification of items. However, the parents of the cohort (n = 243) primarily filled out the questionnaire while the child underwent the biennial examination in the cohort, which gave them the opportunity to ask for further explanation while answering. A few parents felt that the question about the age of first regular daily SMU was hard to answer and made less sense to them than “age when the child had its own device”. The questions about first regular daily SMU showed good, but slightly lower, kappa values (0.60 and 0.81, respectively) than the questions about the age at which the child owned its first personal device (smartphone or tablet; kappa values 0.89 and 0.97). It seemed harder for parents to estimate the age of first regular daily screen media use, as first and second responses could differ by up to 2 years. Therefore, in the final version of the SCREENS-Q, the questions about age of first regular daily SMU were replaced with two more questions about the age when the child had its own device (personal computer and/or laptop).

In question 4 we asked: “Thinking about the last month, which of the following devices has your child used?” Respondents had two possible response categories (“yes/no”) for each of the displayed screen devices, which gave very limited information about the child’s actual use. Based on the modest variation in responses in the sample, we decided to modify and expand the question to “How often has the child used the following screen media devices in the household within the past month?” and include five response categories (1. “Every day or almost every day of the week”, 2. “4-5 days per week”, 3. “2-3 days per week”, 4. “1 day or less per week”, and 5. “Never”).

For question 11, about rules for media use, we initially had nine statements about rules with different formulations, such as “the child can decide on its own how much time it spends on screen media” and “we have firm rules for how much time the child can spend on screen media”. We tested the overall agreement between these somewhat similar statements to assess whether parents responded to the different phrasings in a similar way. Overall agreement was high (from 67.74 to 90.32%), and we decided that the final version should include only five of them. Test-retest reliability of all items about rules (question 11) was moderate to substantial, with kappa values from 0.71 to 0.79 (a single item, 11.b, showed only fair agreement, κ = 0.30) (see Table 4 for a summary of reliability measures by domain).

Feasibility

Feasibility in the field test sample was considered good: all parents (n = 243) present for the child’s 7-year examination filled out the questionnaire, and n = 239 (98.4%) completed it without any missing answers. All questionnaires were completed within 15 min when completion was not interrupted by other tasks.

Discussion

The main focus of this paper was to describe the development of the SCREENS-Q, designed to assess 6–10-year-old children’s leisure time screen media use and behavior, the screen media environment and its proximal correlates, and to determine multiple domains of its validity and test-retest reliability. It was developed based on a conceptual model informed by the literature, and face and content validity were established by involving screen media experts and end users (parents) in an iterative process. Internal consistency, assessed in a field test in a larger sample, was estimated to be high for screen media time use, and test-retest reliability was moderate to substantial for all items. Overall, the SCREENS-Q provides an up-to-date standardized questionnaire for parent-reported assessment of children’s leisure time SMU and habits and possible screen-media-specific correlates.

To our knowledge, the SCREENS-Q is the first questionnaire battery to comprehensively assess children’s screen media habits, the screen media home environment, and potential determinants of those habits that may assist in identifying possible targets for intervention. These include possible individual-level factors, home- and interpersonal-environmental-level factors, and a few school/neighborhood/community-level factors, according to our suggested socio-ecological model of child screen media habits.

Test-retest reliability of the SCREENS-Q was moderate to substantial (ICC ranging from 0.67 to 0.90) for all items, which is comparable to, but higher than, the “acceptable or better” test-retest reliability of the screen media questions in the HAPPY study by Hinkley et al. (ICC ranging from 0.31 to 0.84) [27]. This difference might be due to the more detailed and accurate answer categories in the SCREENS-Q (hours and minutes, within 15 min, for each screen media device and activity) compared with the HAPPY study, in which parents of preschoolers were asked to estimate the time in hours that their child engaged in screen behaviors on a typical weekday and weekend day.

There are limitations to this study that need to be considered when interpreting the results and applying the SCREENS-Q. The SCREENS-Q was developed as a parent-reported questionnaire for children aged 6–10 years. Proxy-reporting by parents can have mixed validity; for example, parent reporting of children’s pain has been reported to have low agreement [33], while parent reporting of health-related quality of life in children has been shown to be valid and reliable [34]. Questions assessing time spent in specific behaviors may be particularly difficult to report accurately [35]. Parents’ awareness of, and ability to accurately recall, the time their child spends in a specific behavior might be limited, making the answers prone to lower validity and reliability than objectively measured behavior. In addition, parent reporting may be prone to underestimation due to social desirability response bias, as many parents today may consider children’s high SMU to be outside the social norm [36].

Another limitation is the relatively small non-population-based sample of parents for test-retest reliability. Most reliability measures are sample dependent, and although internal consistency and reliability measures showed moderate to high agreement and reliability for all items, these estimates may not reflect the general target population [21]. The average time between the two administrations of the questionnaire was 22.5 days, and some of the difference between responses may represent true change.

This first version of the SCREENS-Q was developed in cooperation with Danish parents and media experts, specifically to capture the SMU of 6–10-year-old Danish children, and was tested only on 7–8-year-old children in this study. In accordance with suggested age limits for self-report in children [37, 38], we believe that from approximately 11 years of age, children will be able to self-report their screen use and behavior with higher accuracy than parental report provides. Therefore, a self-report version of the SCREENS-Q for older children and young people will need to be developed and evaluated. We collected data on parental screen media use, as we hypothesized that parents’ screen media use might be a determinant of their child’s screen media use. For pragmatic reasons, we asked only the attending parent about screen media use, which might be a limitation, as mothers’ and fathers’ screen media use could relate differently to the child’s media use. To fully address parental screen media use as a possible determinant in a future study, the two questions could be administered to each parent separately.

Generalizability might be restricted to young Danish children, and future investigations of validity and reliability in other samples, nationalities, and cultures are warranted. School policy on SMU during school time might also have an impact on children’s leisure time SMU; that domain is not well covered in the SCREENS-Q. Furthermore, although the questionnaire includes numerous items, all respondents completed it quickly when answering without interruption. Yet, in the population-based field test sample, the majority of parents were well educated, so it remains unclear whether less educated parents would show similar answers, completeness, and response times. Finally, although we conducted a comprehensive analysis of the construct to generate a broad list of domains and items, the questionnaire may still lack coverage of some elements of the possible proximal correlates of children’s screen media use.

A strength of this study is the careful conceptual development, involving experts and end users, and the fact that the questionnaire concurrently covers a wide range of domains of screen media behavior as well as factors that might influence children’s SMU.

Conclusion

The SCREENS-Q was developed to meet the need for a comprehensive research tool to assess children’s screen habits and their possible proximal correlates, based on a socio-ecological perspective. We have developed a feasible questionnaire, validated multiple constructs, and found moderate to substantial test-retest reliability for all inspected items. In conclusion, the SCREENS-Q is a promising tool for investigating children’s SMU. We are planning a future study to carefully examine the criterion validity of the time use items, and we are currently collecting SCREENS-Q data in a population-based sample to examine how the proximal correlates of screen media habits relate to SMU in children.

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CI: Confidence interval

ICC: Intraclass correlation coefficient

OCC: Odense Child Cohort

SCREENS-Q: SCREENS questionnaire

SEM: Standard error of the mean

SMU: Screen media use

TV: Television

References

  1. Danmarks Statistik. Elektronik i hjemmet 2019. Available from: https://www.dst.dk/da/Statistik/emner/priser-og-forbrug/forbrug/elektronik-i-hjemmet.

  2. Barr R, Danziger C, Hilliard M, Andolina C, Ruskis J. Amount, content and context of infant media exposure: a parental questionnaire and diary analysis. Int J Early Years Educ. 2010;18(2):107–22.

  3. Asplund KM, Kair LR, Arain YH, Cervantes M, Oreskovic NM, Zuckerman KE. Early childhood screen time and parental attitudes toward child television viewing in a low-income Latino population attending the special supplemental nutrition program for women, infants, and children. Child Obes. 2015;11(5):590–9.

  4. Goh SN, Teh LH, Tay WR, Anantharaman S, van Dam RM, Tan CS, et al. Sociodemographic, home environment and parental influences on total and device-specific screen viewing in children aged 2 years and below: an observational study. BMJ Open. 2016;6(1):e009113.

  5. Grontved A, Singhammer J, Froberg K, Moller NC, Pan A, Pfeiffer KA, et al. A prospective study of screen time in adolescence and depression symptoms in young adulthood. Prev Med. 2015;81:108–13.

  6. Anderson P, de Bruijn A, Angus K, Gordon R, Hastings G. Impact of alcohol advertising and media exposure on adolescent alcohol use: a systematic review of longitudinal studies. Alcohol Alcohol. 2009;44(3):229–43.

  7. L'Engle KL, Brown JD, Kenneavy K. The mass media are an important context for adolescents' sexual behavior. J Adolesc Health. 2006;38(3):186–92.

  8. Strasburger VC. Alcohol advertising and adolescents. Pediatr Clin N Am. 2002;49(2):353–76.

  9. Mendelsohn AL, Berkule SB, Tomopoulos S, Tamis-LeMonda CS, Huberman HS, Alvir J, et al. Infant television and video exposure associated with limited parent-child verbal interactions in low socioeconomic status households. Arch Pediatr Adolesc Med. 2008;162(5):411–7.

  10. Casiano H, Kinley DJ, Katz LY, Chartier MJ, Sareen J. Media use and health outcomes in adolescents: findings from a nationally representative survey. J Can Acad Child Adolesc Psychiatry. 2012;21(4):296–301.

  11. Hinkley T, Verbestel V, Ahrens W, Lissner L, Molnar D, Moreno LA, et al. Early childhood electronic media use as a predictor of poorer well-being: a prospective cohort study. JAMA Pediatr. 2014;168(5):485–92.

  12. Hinkley T, Timperio A, Salmon J, Hesketh K. Does preschool physical activity and electronic media use predict later social and emotional skills at 6 to 8 years? A cohort study. J Phys Act Health. 2017;14(4):308–16.

  13. Schmitz KH, Harnack L, Fulton JE, Jacobs DR Jr, Gao S, Lytle LA, et al. Reliability and validity of a brief questionnaire to assess television viewing and computer use by middle school children. J Sch Health. 2004;74(9):370–7.

  14. Hardy LL, Booth ML, Okely AD. The reliability of the adolescent sedentary activity questionnaire (ASAQ). Prev Med. 2007;45(1):71–4.

  15. Tolchinsky A. Development of a self-report questionnaire to measure problematic video game play and its relationship to other psychological phenomena. Master's theses and doctoral dissertations, and graduate capstone projects: Eastern Michigan University; 2013.

  16. Przybylski AK, Weinstein N. A large-scale test of the goldilocks hypothesis. Psychol Sci. 2017;28(2):204–15.

  17. Kabali HK, Irigoyen MM, Nunez-Davis R, Budacki JG, Mohanty SH, Leister KP, et al. Exposure and use of mobile media devices by young children. Pediatrics. 2015;136(6):1044–50.

  18. Nathanson AI, Beyens I. The relation between use of mobile electronic devices and bedtime resistance, sleep duration, and daytime sleepiness among preschoolers. Behav Sleep Med. 2018;16(2):202–19.

  19. Mokkink LB, Terwee CB, Patrick DL, Alonso J, Stratford PW, Knol DL, et al. The COSMIN checklist for assessing the methodological quality of studies on measurement properties of health status measurement instruments: an international Delphi study. Qual Life Res. 2010;19(4):539–49.

  20. Spruyt K, Gozal D. Development of pediatric sleep questionnaires as diagnostic or epidemiological tools: a brief review of dos and don'ts. Sleep Med Rev. 2011;15(1):7–17.

  21. De Vet HC, Terwee CB, Mokkink LB, Knol DL. Measurement in medicine: a practical guide. 1st ed. New York: Cambridge University Press; 2011.

  22. Paudel S, Jancey J, Subedi N, Leavy J. Correlates of mobile screen media use among children aged 0–8: a systematic review. BMJ Open. 2017;7(10):e014585.

  23. Ye S, Chen L, Wang Q, Li Q. Correlates of screen time among 8–19-year-old students in China. BMC Public Health. 2018;18(1):467.

  24. Kyhl HB, Jensen TK, Barington T, Buhl S, Norberg LA, Jorgensen JS, et al. The Odense child cohort: aims, design, and cohort profile. Paediatr Perinat Epidemiol. 2015;29(3):250–8.

  25. Hestbaek L, Andersen ST, Skovgaard T, Olesen LG, Elmose M, Bleses D, et al. Influence of motor skills training on children's development evaluated in the motor skills in PreSchool (MiPS) study-DK: study protocol for a randomized controlled trial, nested in a cohort study. Trials. 2017;18(1):400.

  26. Jongenelis MI, Scully M, Morley B, Pratt IS, Slevin T. Physical activity and screen-based recreation: prevalences and trends over time among adolescents and barriers to recommended engagement. Prev Med. 2018;106:66–72.

  27. Hinkley T, Salmon J, Okely AD, Crawford D, Hesketh K. The HAPPY study: development and reliability of a parent survey to assess correlates of preschool children's physical activity. J Sci Med Sport. 2012;15(5):407–17.

  28. Streiner DL, Norman GR. Health measurement scales: a practical guide to their development and use. 3rd ed. Oxford: Oxford University Press; 2003.

  29. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159–74.

  30. Cicchetti DV, Feinstein AR. High agreement but low kappa: II. Resolving the paradoxes. J Clin Epidemiol. 1990;43(6):551–8.

  31. Feinstein AR, Cicchetti DV. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990;43(6):543–9.

  32. Kirkwood BR, Sterne JAC. Essential medical statistics. 2nd ed. London: Blackwell Science Ltd; 2003.

  33. Zhou H, Roberts P, Horgan L. Association between self-report pain ratings of child and parent, child and nurse and parent and nurse dyads: meta-analysis. J Adv Nurs. 2008;63(4):334–42.

  34. Varni JW, Limbers CA, Burwinkle TM. Parent proxy-report of their children's health-related quality of life: an analysis of 13,878 parents' reliability and validity across age subgroups using the PedsQL 4.0 Generic Core Scales. Health Qual Life Outcomes. 2007;5:2.

  35. Veitch J, Salmon J, Ball K. The validity and reliability of an instrument to assess children's outdoor play in various locations. J Sci Med Sport. 2009;12(5):579–82.

  36. Thompson JL, Sebire SJ, Kesten JM, Zahra J, Edwards M, Solomon-Moore E, et al. How parents perceive screen viewing in their 5–6 year old child within the context of their own screen viewing time: a mixed-methods study. BMC Public Health. 2017;17(1):471.

  37. Leeuw ED. Improving data quality when surveying children and adolescents: cognitive and social development and its role in questionnaire construction and pretesting; 2011.

  38. Shiroiwa T, Fukuda T, Shimozuma K. Psychometric properties of the Japanese version of the EQ-5D-Y by self-report and proxy-report: reliability and construct validity. Qual Life Res. 2019;28(11):3093–105.

  39. European Committee for Standardization. EN ISO 17100:2015. Translation services. BSI Standards Limited; 2015. https://www.en-standard.eu/din-en-iso-17100-translation-services-requirements-for-translation-services-iso-17100-2015/?gclid=EAIaIQobChMIktjQ99Gh6QIVweF3Ch0CgAYrEAMYASAAEgJss_D_BwE.

  40. Beaton DE, Bombardier C, Guillemin F, Ferraz MB. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine (Phila Pa 1976). 2000;25(24):3186–91.


Acknowledgements

We are very grateful to all the parents who took the time to fill out the questionnaire for this development and validation study, and to the Danish media experts who took the time to assess the coverage of the SCREENS-Q: Helle S. Jensen, associate professor, Department of Culture and Society; Astrid Haug, associate professor, Aarhus University, Department of Communication and Culture; Stine L. Johansen, associate professor, Department of Information and Media Studies, Faculty of Humanities, Aarhus University; Jonas Ravn, Center for Digital Pædagogik, Master in Media Studies, Aarhus University; Morten M. Fenger, PhD, psychology researcher; Jesper Balslev, associate professor, Department of Communication and Arts, Roskilde University; Tina S. Gretlund, media researcher, Data Protection Officer and Head of Data Ethics, DR, Master in Media Studies, University of Copenhagen; Michael Oxfeldt, media researcher, DR, Master in Cross Media Communication, University of Copenhagen; Susanne Vangsgaard, special manager of questionnaires and health; and Anne Mette Thorhauge, associate professor, PhD, Department of Media, Cognition and Communication, University of Copenhagen. Last but not least, a very special acknowledgement to the staff of the Odense Child Cohort for administering the distribution of the SCREENS-Q to the parents, and to OPEN for collecting data.

Availability and translation of the SCREENS-Q

The SCREENS-Q is available in the original Danish version in Additional file 1 and in an English version in Additional file 2. The English version was translated by a professional company according to the ISO 17100 quality procedures for translation [39]. For use in countries other than Denmark, a cross-cultural validation is recommended [40]. The SCREENS-Q should be used with proper referencing. Both versions are free of charge for researchers and practitioners.

Funding

A European Research Council Starting Grant (no. 716657) was the major source of funding for the project. The Center for Applied Health Science at University College Lillebaelt, Denmark, partially funded the first author. The funders had no role in the design of the study, nor in the collection, management, analysis, or interpretation of the data, the writing of this paper, or the decision to submit the report for publication in BMC Public Health.

Author information


Contributions

Initial development of study and acquisition of funding: AG, HK, CW. Further development of study design and methods: CW, HK, AG, MGR, LGO, PLK and JP. Wrote the first draft for the manuscript: HK & CW. Contributed to the further development and writing of the manuscript: All authors. Approved the final version: All authors.

Corresponding author

Correspondence to Heidi Klakk.

Ethics declarations

Ethics approval and consent to participate

Ethical approval for the field test study with parents of 243 children was obtained from The Regional Committees on Health Research Ethics for Southern Denmark (Project-ID: S-20170033). Parents provided written informed consent before reporting on behalf of their children. The committee judged that ethical approval was not required for the pilot studies with ten parents and test-retest reliability evaluation with 35 parents (Project-ID: S-20170030).

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

The final version of the SCREENS-Q in Danish (original)

Additional file 2.

Translated version of the SCREENS-Q in English

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Klakk, H., Wester, C.T., Olesen, L.G. et al. The development of a questionnaire to assess leisure time screen-based media use and its proximal correlates in children (SCREENS-Q). BMC Public Health 20, 664 (2020). https://doi.org/10.1186/s12889-020-08810-6


Keywords

  • Screen-media use
  • Children
  • Questionnaire
  • Correlates