
Assessment of the Chinese Resident Health Literacy Scale in a population-based sample in South China



Background

A national health literacy scale was developed in China in 2012, though no studies have validated it. In this investigation, we assessed the reliability, construct validity, and measurement invariance of that scale.


Methods

A population-based sample of 3731 participants in Hunan Province was used to validate the Chinese Resident Health Literacy Scale based on item response theory and classical test theory (including split-half coefficient, Cronbach’s alpha, and confirmatory factor analysis). Measurement invariance was examined by differential item functioning.


Results

The overall Cronbach’s alpha of the scale was 0.95, and the Spearman-Brown coefficient was 0.94. Confirmatory factor analysis showed that the test measured a unidimensional construct with three highly correlated factors. The highest discrimination was found among participants with limited to moderate health literacy. In all, 64 items were selected from the original scale based on factor loading, Pearson’s correlation coefficient, and the discrimination and difficulty parameters in item response theory. Differential item functioning was statistically significant but small in magnitude. According to the two-level linear model, health literacy was associated with education level, occupation, and income.


Conclusions

The 2012 national health literacy scale was validated, and 64 items were selected based on classical test theory and item response theory. The revised version of the scale has strong psychometric properties, with only minor differential item functioning.



Background

The concept of health literacy was introduced in China in 2005 by the Chinese government through a manual entitled “Basic Knowledge and Skills of People’s Health Literacy” [1, 2]. That manual used the World Health Organization’s definition of health literacy: the cognitive and social skills which determine the motivation and ability of individuals to gain access to, understand and use information in ways which promote and maintain good health [3]. Under that definition, health literacy goes beyond the narrow concept of health education and individual behavior-oriented communication: it addresses the environmental, political, and social factors that determine health. The US Institute of Medicine defines health literacy in a similar way: a set of skills that enables people to participate more fully in society, rather than merely a set of functional capabilities [4]. The ability to read and write is the foundation of health literacy, upon which a range of complementary skills can be built [5].

In 2008, based on the existing situation in China, the Chinese government defined health literacy as a set of capabilities in three domains: conceptual knowledge and attitudes; behavior and lifestyle; and health-related skills. The first nationwide survey on health literacy in China was conducted in 2008, and it focused on health knowledge [6]. The second national survey was conducted in 2012, with an emphasis on basic reading ability, arithmetic, and understanding of medical information [7].

Internationally, the most commonly used measure of health literacy is the Rapid Estimate of Adult Literacy in Medicine and its shortened version; these assess an adult patient’s ability to read common medical terms and lay expressions for body parts and illnesses [8–10]. The Test of Functional Health Literacy in Adults and its shortened version are timed tests of reading comprehension of medical information [11, 12]. Other measures of health literacy in clinical settings include the following: the Medical Achievement Reading Test; the Newest Vital Sign; the Set of Brief Screening Questions; Functional, Communicative and Critical Health Literacy; the eHealth Literacy Scale; the Cancer Health Literacy Test; and the Diabetes Numeracy Test [13–20]. These measures focus on a single dimension of health literacy, rather than identifying its multidimensional nature [21].

By contrast, some measures have expanded the scope of medical care-related literacy; they include the following: the Health Activity Literacy Scale; the Demographic Assessment of Health Literacy; the 2003 National Assessment of Adult Literacy; the Adult Literacy and Life Skills Survey; and the Health Literacy Assessment Using Talking Touchscreen Technology [22–27]. These scales and questionnaires are more comprehensive because they involve different health-related competencies. However, they are considered proxy measures owing to the lack of an explicit definition of the concept of health literacy [28].

In China, health literacy in clinical settings has been measured using translated versions of scales used overseas as well as original Chinese scales among certain populations [29–38]. For example, health literacy among older adults has been measured using the Chinese version of the Rapid Estimate of Adult Literacy in Medicine [33]; the translated version of the Test of Functional Health Literacy in Adults was employed to measure health literacy among adolescents aged 12–16 years in Nanning; the eHealth Literacy Scale was translated and used on a sample of senior high school students [34]; the Chinese version of the Diabetes Numeracy Test was used in a cluster-randomized trial in patients with diabetes [35]; and a three-question measure of health literacy derived from a systematic review was applied among cataract patients [36, 39]. However, all these studies investigated cross-sectional health literacy without evaluating the instruments employed [40–42]. In the present study, we assessed the reliability and construct validity of the Chinese Resident Health Literacy Scale based on item response theory (IRT) and classical test theory using a population-based sample from Hunan Province in 2012. We also examined the association between health literacy and sociodemographic factors.



Methods

The participants were residents aged 15 to 69 years who had lived in the sampled regions for more than 6 of the previous 12 months. Individuals such as patients, students, military personnel, and prisoners residing in hospitals, school dormitories, nursing homes, military bases, and prisons were excluded from the survey.

We used a population-based stratified sampling frame, as shown in Fig. 1. The sampling strata included 13 cities or counties in Hunan Province, three streets or towns in each city or county, and two communities or villages (where the number of households exceeded 750) in each street or town. If there were fewer than 750 households within a community or village, neighboring units were combined until that total was met. In each household, information on all family members and non-relative members (e.g., hired nannies) aged 15–69 years who had been living there for more than 6 of the previous 12 months was recorded, including gender (male or female) and age (listed from eldest to youngest). One member of each household was selected for the survey by means of a Kish grid [43]. Unselected members were not allowed to complete the survey as substitutes.

Fig. 1

Study sampling process
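The within-household selection step can be illustrated with a simplified sketch. In this sketch, the published Kish selection tables are replaced by an equal-probability random draw for brevity, and all names and ages are hypothetical:

```python
import random

def order_members(members):
    """Order eligible members as in a Kish household listing:
    males before females, each group from eldest to youngest."""
    return sorted(members, key=lambda m: (m["sex"] != "male", -m["age"]))

def kish_select(members, rng=random):
    """Pick one member. A real Kish grid maps a pre-assigned selection
    letter and the household size to a fixed row/column; here that table
    lookup is replaced by an equal-probability draw (assumption)."""
    listing = order_members(members)
    return listing[rng.randrange(len(listing))]

household = [
    {"name": "A", "sex": "female", "age": 44},
    {"name": "B", "sex": "male", "age": 47},
    {"name": "C", "sex": "male", "age": 16},
]
print([m["name"] for m in order_members(household)])  # ['B', 'C', 'A']
```

Because the listing order is fixed before selection, the draw cannot be biased toward whoever happens to open the door, which is the point of the Kish procedure.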

The research protocol was reviewed and approved by the Medical Ethics Committee of the National Health and Family Planning Commission of China. All participants who agreed to participate in the study signed an informed consent form at the beginning of the survey.

Study design

The Chinese Resident Health Literacy Scale was developed based on a manual published by the Chinese Ministry of Health in 2008—“Basic Knowledge and Skills of People’s Health Literacy” (trial edition) [1]. The scale was designed by experts in public health, health education and promotion, and clinical medicine using the Delphi method. Details of the development procedure have been described in a previous paper [44]. The scale contains 80 items across three dimensions: (1) knowledge and attitudes; (2) behavior and lifestyle; and (3) health-related skills. The questions cover six aspects: scientific views of health; infectious diseases; chronic diseases; safety and first aid; medical care; and health information. As indicated in Table 1, there are four types of questions in the scale: true-or-false; single-answer (only one correct answer in a multiple-choice question); multiple-answer (more than one correct answer in a multiple-choice question); and situation questions. For multiple-answer questions, a response was scored as correct only if it contained all the correct answers and no wrong ones. Situation questions followed a paragraph of instructions or medical information.

Table 1 Examples of items
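The all-or-nothing scoring rule for multiple-answer questions can be sketched as follows (the option letters and answer key are hypothetical):

```python
def score_multiple_answer(selected, correct):
    """A multiple-answer item scores 1 only when the respondent marks
    every correct option and no incorrect one; otherwise it scores 0."""
    return int(set(selected) == set(correct))

# hypothetical item whose correct options are A, C, and D
correct = {"A", "C", "D"}
print(score_multiple_answer({"A", "C", "D"}, correct))       # 1: exact match
print(score_multiple_answer({"A", "C"}, correct))            # 0: missed D
print(score_multiple_answer({"A", "B", "C", "D"}, correct))  # 0: extra wrong option
```

This dichotomous scoring is what allows all four question types to enter the same two-parameter logistic IRT model later in the analysis.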

Before the field study, a survey team was established in each of the 13 cities or counties; the team comprised a principal, a coordinator, four to six investigators, a quality controller, and a data manager. All these team members received training for the sampling method, research tools, and quality control. A simulated survey was conducted during the training, and the investigators’ eligibility was assessed before performing the field survey.

Written informed consent was obtained from all participants before the survey. The scale was self-administered. However, if a participant was unable to complete the scale owing to impaired vision or other such reasons, an interview was used as an alternative. In that situation, the investigators would complete the questions in a neutral fashion on behalf of the participants.

Statistical analyses

Because repeated measures were not used, test-retest reliability was not determined. The split-half coefficient and Cronbach’s alpha were estimated before and after the item-selection procedure.
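Both reliability coefficients can be computed from a respondents-by-items score matrix. A minimal sketch on simulated binary responses follows; the simulation is purely illustrative and is not the study data:

```python
import random

def cronbach_alpha(X):
    """Cronbach's alpha for a respondents-by-items score matrix X."""
    def var(v):  # sample variance (ddof = 1)
        m = sum(v) / len(v)
        return sum((x - m) ** 2 for x in v) / (len(v) - 1)
    k = len(X[0])
    item_vars = [var([row[j] for row in X]) for j in range(k)]
    total_var = var([sum(row) for row in X])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

def split_half(X):
    """Odd-even split-half reliability, stepped up with the
    Spearman-Brown prophecy formula."""
    odd = [sum(row[0::2]) for row in X]
    even = [sum(row[1::2]) for row in X]
    mo, me = sum(odd) / len(odd), sum(even) / len(even)
    r = (sum((o - mo) * (e - me) for o, e in zip(odd, even))
         / (sum((o - mo) ** 2 for o in odd)
            * sum((e - me) ** 2 for e in even)) ** 0.5)
    return 2 * r / (1 + r)

# simulated data: 500 respondents, 10 binary items driven by one latent trait
rng = random.Random(42)
data = []
for _ in range(500):
    theta = rng.gauss(0, 1)
    data.append([int(rng.gauss(theta, 1) > 0) for _ in range(10)])
print(round(cronbach_alpha(data), 2), round(split_half(data), 2))
```

Because the simulated items share a single latent trait, both coefficients come out well above zero, mirroring the high values reported for the real scale.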

IRT was used to evaluate the precision of the measurements. IRT is a family of associated mathematical models that relate latent traits (ability) to the probability of responses to items in an assessment, and it has been widely used in psychometrics and health assessment [45]. It specifies a nonlinear relationship between binary, ordinal, or categorical responses and the latent trait (health literacy in this case). Compared with classical test theory approaches, the advantages of IRT include the following: near-equal interval measurement; representation of respondents and items on the same scale; and independence of person estimates from the particular set of items used for estimation [46].

We applied a two-parameter logistic IRT model for dichotomous responses. The two-parameter logistic model includes a difficulty parameter and discrimination parameter for each item. The difficulty parameter is the point on the ability scale that corresponds to a probability of a correct response of 0.5; the discrimination parameter estimates how well an item can differentiate among respondents with different levels of ability. Because the “I don’t know” choice was included for all questions, guessing parameters were not considered. Items with a discrimination parameter of 0.5 to 2.0 and a difficulty parameter corresponding to a certain region of the ability scale (−3.0 to 3.0) provide the most information [45, 47]. Parameters were estimated using a marginal maximum-likelihood method. The IRT model was recalibrated after the item-selection procedure. Measurement invariance of the scale among the different subgroups (by gender and race) was estimated using differential item functioning in the IRT model.

Pearson’s correlation coefficient was determined. An eligible item had to be significantly and at least moderately (0.4 to 0.7) correlated with the total score of its domain; hence, the correlation coefficient between them had to be above 0.4 [48]. Construct validity was assessed by confirmatory factor analysis (CFA). The assumed structure of the scale (three dimensions) was tested using a structural equation model. Since the items were binary measures, the unweighted least-squares method was employed for parameter estimation in the structural equation model. The chi-square value, goodness-of-fit index, root mean square residual, and parsimony goodness-of-fit index were used to assess model fit. Several studies have recommended that factor loadings should be above 0.4 [49–51].

Items that met two or more of the following criteria were removed: (1) discrimination parameter <0.5 or >2.0; (2) difficulty parameter < −3.0 or >3.0; (3) factor loading <0.4; and (4) Pearson’s correlation coefficient <0.4. In addition, items with strong discrimination (≥1.0) were selected to form a short version of the scale.
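The removal rule can be expressed as a simple filter; the item parameters below are hypothetical and serve only to show the two-or-more-violations logic:

```python
def keep_item(item):
    """An item is removed when it violates two or more of the four
    criteria used in this study; otherwise it is kept."""
    violations = sum([
        not (0.5 <= item["discrimination"] <= 2.0),
        not (-3.0 <= item["difficulty"] <= 3.0),
        item["factor_loading"] < 0.4,
        item["item_total_r"] < 0.4,
    ])
    return violations < 2

items = [
    {"id": 1, "discrimination": 1.1, "difficulty": -0.4,
     "factor_loading": 0.62, "item_total_r": 0.55},
    {"id": 2, "discrimination": 0.3, "difficulty": 3.8,
     "factor_loading": 0.31, "item_total_r": 0.28},
]
print([it["id"] for it in items if keep_item(it)])  # [1]
```

The short form described later adds a further filter on top of this one, retaining only kept items with discrimination ≥1.0.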

The demographic variables were described, and raw scores among the different subgroups were compared using analysis of variance. After item selection, the association between health literacy scores and demographic variables was tested by means of a multilevel linear model.
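The intracluster correlation that motivates the multilevel model can be estimated from variance components. A minimal one-way ANOVA sketch, assuming balanced clusters (equal numbers of respondents per city), is:

```python
def intracluster_correlation(groups):
    """One-way ANOVA estimate of the intracluster correlation:
    between-cluster variance over total variance. Assumes every
    cluster (e.g., city) has the same number of respondents."""
    k = len(groups)      # number of clusters
    n = len(groups[0])   # respondents per cluster
    grand = sum(sum(g) for g in groups) / (k * n)
    means = [sum(g) / n for g in groups]
    msb = n * sum((m - grand) ** 2 for m in means) / (k - 1)
    msw = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))
    var_between = max((msb - msw) / n, 0.0)
    return var_between / (var_between + msw)

# hypothetical score clusters from two cities
print(intracluster_correlation([[62, 70, 66], [48, 52, 50]]))
```

A large value, like the 34.5 % reported in the Results, means respondents within the same city resemble each other, so a two-level model is needed rather than ordinary regression.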

The IRT calibrations were conducted using PARSCALE 4.1 (Scientific Software International Inc., Lincolnwood, USA). CFA was performed in AMOS 17.0 (Arbuckle JL and SPSS Inc., Chicago, USA). Multilevel model estimation was carried out with MLwiN 2.1 (Rasbash J, Charlton C, Browne WJ, Healy M, and Cameron B, Centre for Multilevel Modelling, University of Bristol, UK). Other analyses were conducted using SAS 9.2 (SAS Institute Inc., Cary, USA). The significance level was 0.05 for all statistical tests.


Results

In all, 3900 participants were sampled, and 3731 (95.7 %) completed the survey without apparent logical errors or missing items. As indicated in Table 2, there were significant differences in the health literacy scores among the subgroups of age, education level, occupation, annual per capita income, and residence (P <0.05), but not among the subgroups of gender and race. The proportion of correct responses to the 80 items varied from 10.8 to 96.7 % (Table 2).

Table 2 Sociodemographic characteristics of the participants and association with health literacy scores

The Spearman–Brown split-half coefficient was 0.94. The overall Cronbach’s alpha was 0.95; Cronbach’s alpha of the three dimensions was as follows: 0.90 (knowledge and attitude, 38 items); 0.83 (behavior and lifestyle, 22 items); and 0.85 (skills, 20 items). The two-parameter logistic model fitted the data well (P >0.05). The difficulty and discrimination parameters from the IRT model appear in Table 3. Most items exhibited good discriminative power and moderate difficulty. As shown in Fig. 2, the test information reached a peak when the participants’ ability was between −1 and 0, which indicates that the measurement was most discriminative among participants with limited to medium-level abilities in health literacy.

Table 3 Evaluation of items based on item response theory and confirmatory factor analysis
Fig. 2

Test information and participant ability. Ability signifies health literacy estimated using the maximum-likelihood method. Ability in the item response theory (IRT) model practically (though not exclusively) ranged from −3 to +3. The test information reached a peak when the ability was between −1 and 0; this indicates that the measurement exhibited highest discriminative power among participants with limited and under-average ability with respect to health literacy

According to the CFA results, the three-factor model showed slightly better fit than the one-factor model. Correlations among the three factors (knowledge and attitudes; behavior and lifestyle; skills) were 0.96–0.98, which provides good evidence of essential unidimensionality, i.e., a single dominant dimension of health literacy. Factor loadings and the correlation coefficients between items and dimensional scores are presented in Table 3.

In all, 16 items were removed from the scale according to the criteria of item selection; 10 of them were true-or-false questions, which showed poor discriminative power and small factor loading. Sixty-four items were selected according to classical and modern test theory standards. The Spearman-Brown coefficient was 0.94. The overall Cronbach’s alpha was 0.95; Cronbach’s alpha of the three dimensions was as follows: 0.90 (knowledge and attitude, 30 items); 0.83 (behavior and lifestyle, 16 items); and 0.86 (skills, 18 items). Goodness-of-fit of the CFA and the IRT models improved slightly compared with the original scale. Factor loading, difficulty parameters, and discrimination parameters of all the items met the criteria.

A shorter version of the scale, comprising 19 items with discrimination parameters ≥1.0, was also created. The shorter version consisted of eight items in the knowledge and attitude dimension, five items in the behavior and lifestyle dimension, and six items in the health-related skill dimension. The overall Cronbach’s alpha was 0.88; Cronbach’s alpha of the three dimensions was 0.76, 0.64, and 0.77 respectively. The split-half coefficient was 0.87. The correlation coefficients and factor loadings of all the items were above 0.4 (mostly >0.5), and the discrimination parameters of all the items were 0.5–2.0 (mostly 1.0–2.0).

Differential item functioning in the IRT model was used to examine measurement invariance. The chi-square tests showed statistically significant differential item functioning by both gender and race (P <0.05); however, the slope and threshold parameters were very close between male and female groups as well as between urban and rural groups.

The association between health literacy (revised scale) and demographic variables was explored using a two-level model because intracluster correlation was identified at the level of cities. As indicated in Table 4, education level, occupation, and income were associated with health literacy. Participants with higher socioeconomic status (higher education level and greater income) were more likely to have adequate health literacy. The intracluster correlation coefficient at the city level was 34.5 %.

Table 4 Association between health literacy and sociodemographic variables based on a two-level linear model


Discussion

To validate the scale used in the 2012 National Health Literacy Survey, we performed this study using a population-based sample in Hunan Province. Classical test theory (Cronbach’s alpha, split-half coefficient, and factor analysis) and modern test theory (IRT) were used to validate the scale. We found that the 2012 health literacy scale meets psychometric standards. The overall Cronbach’s alpha was 0.95. The assumption that the scale measures a unidimensional construct was supported by the three-factor model fitting only marginally better than the one-factor model and by the three factors (knowledge and attitudes; behavior and lifestyle; skills) being highly correlated. Among the 80 items tested, 16 performed poorly and were removed. The remaining 64 items yielded a reliable estimate of health literacy, especially among participants with moderate and limited health literacy. The short version of the scale, which comprises 19 items with discrimination parameters ≥1.0, did not meet the standard for individual measurement (reliability ≥0.9). Nevertheless, the short version may still be effective for group comparisons [52].

In IRT, an item is useful only when it has good discrimination and its difficulty corresponds to a certain range of the ability scale: questions that are too hard or too easy provide little information [53]. However, if the discrimination is too high (i.e., greater than 2.5, as seen in some clinical and psychological studies), the measured construct is often conceptually narrow. We limited the discrimination parameters to 0.5–2.0 because health literacy is a relatively broad concept. In this study, we identified items with inappropriate discrimination and difficulty; most of them also had low factor loadings and low correlations with the dimension score. However, the test used in the present study is time consuming. It usually took 30 min for an adult to complete, and even longer for participants with limited literacy. Thus, in the future, it will be necessary to develop computerized adaptive testing and provide participants with short, tailored tests whose scores are comparable to those of fixed-length tests.

Differential item functioning was statistically significant by both gender and race; however, the slope and threshold parameters were extremely close between the male and female groups as well as between the urban and rural groups, and we observed no large differences between gender or race groups. The sample size in our study was sufficiently large to detect even such slight differences. Thus, our results suggest that the Chinese Resident Health Literacy Scale can be applied to Chinese respondents of different genders and races while yielding comparable scores.

The demographic factors associated with health literacy included education level, occupation, and annual income. Participants with higher education and better economic status were more likely to have adequate health literacy. Gender, age, race, and type of residence were not significant in the regression. The multilevel model identified an obvious intracluster correlation at the city level (the primary unit in the sampling frame), with an intracluster correlation coefficient of 34.5 %. Health literacy is an outcome of health promotion, and both health literacy and socioeconomic factors are determinants of health. However, the potential of health education as a tool for addressing the social determinants of health has been neglected [54]. Health education should not focus only on changing personal lifestyles and improving compliance with disease management, but also on raising awareness of the social determinants of health [5].

Some limitations of this study deserve mention. First, we did not assess the content validity since the scale was initially developed by an expert panel from the Ministry of Health. Second, we did not perform repeated measures during the field study. Thus, the test-retest reliability was not determined. Third, as noted above, the test is time consuming: it usually took 30 min for an adult to complete and even longer for participants with limited literacy or other conditions.

Despite these limitations, this study has a number of implications. First, the original scale was found to be appropriate in terms of reliability and validity. We removed 16 items according to factor analysis and IRT, and the scores of the 64-item scale correlated highly with the scores of the original scale. Accordingly, the main conclusions of the 2012 National Health Literacy Survey were unaffected by validation of the scale it employed. Second, a shortened 19-item version was created because applying the original scale was very time consuming. The 19-item version was found to be slightly inferior to the original scale in terms of reliability (Cronbach’s alpha decreased from 0.95 to 0.88); however, it would still be effective for group comparisons and population studies. Third, the instruments used in the National Health Literacy Survey in 2008 and 2012 were different. Therefore, a direct comparison based on raw scores would be inappropriate. In the present study, IRT provided an opportunity for longitudinal comparison.


Conclusions

We evaluated and revised the Chinese Resident Health Literacy Scale based on IRT and classical test theory using a population-based sample in Hunan, China. The revised 64-item scale was found to have strong psychometric properties and to be free of substantial differential item functioning across the race and gender groups examined in this study. This is the first investigation to evaluate and revise the instrument used in the 2012 National Health Literacy Survey in China. The findings support use of the revised instrument in health literacy research in public health settings, and this investigation offers useful implications for future studies.



Abbreviations

IRT: Item response theory

CFA: Confirmatory factor analysis


References

1. Chinese Ministry of Health. 66 tips of health: Chinese resident health literacy manual. Beijing: People’s Medical Publishing House; 2008.

2. Li XH. Brief introduction on identification and dissemination of the Basic Knowledge and Skill of People’s Health Literacy by Chinese government. Chin J Health Educ. 2008;24(5):385–8.

3. World Health Organization. Health promotion glossary. Geneva: World Health Organization; 1998.

4. Institute of Medicine. Health literacy: a prescription to end confusion. Washington, DC: National Academies Press; 2004.

5. Nutbeam D. The evolving concept of health literacy. Soc Sci Med. 2008;67(12):2072–8.

6. Wang P, Mao Q, Tao M, Tian X, Li Y, Qian L, et al. Survey on the status of health literacy of Chinese residents in 2008. Chin J Health Educ. 2010;26(4):243–6.

7. Li Y. Introduction of 2012 Chinese residents health literacy monitoring program. Chin J Health Educ. 2014;30(6):563–5.

8. Davis TC, Crouch MA, Long SW, Jackson RH, Bates P, George RB, et al. Rapid assessment of literacy levels of adult primary care patients. Fam Med. 1991;23(6):433–5.

9. Bass PF 3rd, Wilson JF, Griffith CH. A shortened instrument for literacy screening. J Gen Intern Med. 2003;18(12):1036–8.

10. Arozullah AM, Yarnold PR, Bennett CL, Soltysik RC, Wolf MS, Ferreira RM, et al. Development and validation of a short-form, rapid estimate of adult literacy in medicine. Med Care. 2007;45(11):1026–33.

11. Parker RM, Baker DW, Williams MV, Nurss JR. The test of functional health literacy in adults: a new instrument for measuring patients’ literacy skills. J Gen Intern Med. 1995;10(10):537–41.

12. Baker DW, Williams MV, Parker RM, Gazmararian JA, Nurss J. Development of a brief test to measure functional health literacy. Patient Educ Couns. 1999;38(1):33–42.

13. Weiss BD, Mays MZ, Martz W, Castro KM, DeWalt DA, Pignone MP, et al. Quick assessment of literacy in primary care: the newest vital sign. Ann Fam Med. 2005;3(6):514–22.

14. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med. 2004;36(8):588–94.

15. Ishikawa H, Takeuchi T, Yano E. Measuring functional, communicative, and critical health literacy among diabetic patients. Diabetes Care. 2008;31(5):874–9.

16. Norman CD, Skinner HA. eHEALS: The eHealth Literacy Scale. J Med Internet Res. 2006;8(4):e27.

17. Dumenci L, Matsuyama R, Riddle DL, Cartwright LA, Perera RA, Chung H, et al. Measurement of cancer health literacy and identification of patients with limited cancer health literacy. J Health Commun. 2014;19 Suppl 2:205–24.

18. Ishikawa H, Nomura K, Sato M, Yano E. Developing a measure of communicative and critical health literacy: a pilot study of Japanese office workers. Health Promot Int. 2008;23(3):269–74.

19. Huizinga MM, Elasy TA, Wallston KA, Cavanaugh K, Davis D, Gregory RP, et al. Development and validation of the Diabetes Numeracy Test (DNT). BMC Health Serv Res. 2008;8:96.

20. Hanson-Divers EC. Developing a medical achievement reading test to evaluate patient literacy skills: a preliminary study. J Health Care Poor Underserved. 1997;8(1):56–69.

21. Malloy-Weir LJ, Charles C, Gafni A, Entwistle VA. Empirical relationships between health literacy and treatment decision making: a scoping review of the literature. Patient Educ Couns. 2015;98(3):296–309.

22. Hanchate AD, Ash AS, Gazmararian JA, Wolf MS, Paasche-Orlow MK. The Demographic Assessment for Health Literacy (DAHL): a new tool for estimating associations between health literacy and outcomes in national surveys. J Gen Intern Med. 2008;23(10):1561–6.

23. Kutner M, Greenberg E, Jin Y, Paulsen C. The health literacy of America’s adults: results from the 2003 National Assessment of Adult Literacy. Washington, DC: National Center for Education Statistics; 2006.

24. Rudd RE. Health literacy skills of U.S. adults. Am J Health Behav. 2007;31 Suppl 1:S8–18.

25. Canadian Council on Learning. Health literacy in Canada: initial results from the International Adult Literacy and Skills Survey 2007. Ottawa: Canadian Council on Learning; 2007.

26. Australian Bureau of Statistics. Health literacy, Australia 2006. Canberra: Australian Bureau of Statistics; 2008.

27. Hahn EA, Choi SW, Griffith JW, Yost KJ, Baker DW. Health literacy assessment using talking touchscreen technology (Health LiTT): a new item response theory-based measure of health literacy. J Health Commun. 2011;16 Suppl 3:150–62.

28. Jordan JE, Osborne RH, Buchbinder R. Critical appraisal of health literacy indices revealed variable underlying constructs, narrow content and psychometric weaknesses. J Clin Epidemiol. 2011;64(4):366–79.

29. Leung AY, Cheung MK, Chi I. Supplementing vitamin D through sunlight: associating health literacy with sunlight exposure behavior. Arch Gerontol Geriatr. 2015;60(1):134–41.

30. Leung AY, Lou VW, Cheung MK, Chan SS, Chi I. Development and validation of Chinese Health Literacy Scale for Diabetes. J Clin Nurs. 2013;22(15–16):2090–9.

31. Wang J, He Y, Jiang Q, Cai J, Wang W, Zeng Q, et al. Mental health literacy among residents in Shanghai. Shanghai Arch Psychiatry. 2013;25(4):224–35.

32. Lam LT, Yang L. Is low health literacy associated with overweight and obesity in adolescents: an epidemiology study in a 12–16 years old population, Nanning, China, 2012. Arch Public Health. 2014;72(1):11.

33. Simon MA, Li Y, Dong X. Levels of health literacy in a community-dwelling population of Chinese older adults. J Gerontol A Biol Sci Med Sci. 2014;69 Suppl 2:S54–60.

34. Guo SJ, Yu XM, Sun YY, Nie D, Li XM, Wang L. Adaptation and evaluation of Chinese version of eHEALS and its usage among senior high school students. Chin J Health Educ. 2013;29(2):106–8.

35. Xu WH, Rothman RL, Li R, Chen Y, Xia Q, Fang H, et al. Improved self-management skills in Chinese diabetes patients through a comprehensive health literacy strategy: study protocol of a cluster randomized controlled trial. Trials. 2014;15:498.

36. Lin X, Wang M, Zuo Y, Li M, Lin X, Zhu S, et al. Health literacy, computer skills and quality of patient-physician communication in Chinese patients with cataract. PLoS One. 2014;9(9):e107615.

37. Sun X, Yang S, Fisher EB, Shi Y, Wang Y, Zeng Q, et al. Relationships of health literacy, health behavior, and health status regarding infectious respiratory diseases: application of a skill-based measure. J Health Commun. 2014;19 Suppl 2:173–89.

38. Sun X, Shi Y, Zeng Q, Wang Y, Du W, Wei N, et al. Determinants of health literacy and health behavior regarding infectious respiratory diseases: a pathway model. BMC Public Health. 2013;13:261.

39. Powers BJ, Trinh JV, Bosworth HB. Can this patient read and understand written health information? JAMA. 2010;304(1):76–84.

40. Ye XH, Yang Y, Gao YH, Chen SD, Xu Y. Status and determinants of health literacy among adolescents in Guangdong, China. Asian Pac J Cancer Prev. 2014;15(20):8735–40.

41. Wang C, Li H, Li L, Xu D, Kane RL, Meng Q. Health literacy and ethnic disparities in health-related quality of life among rural women: results from a Chinese poor minority area. Health Qual Life Outcomes. 2013;11:153.

42. Wang X, Guo H, Wang L, Li X, Huang M, Liu Z, et al. Investigation of residents’ health literacy status and its risk factors in Jiangsu Province of China. Asia Pac J Public Health. 2015;27(2):NP2764–72.

43. Kish L. A procedure for objective respondent selection within the household. J Am Stat Assoc. 1949;44(247):380–7.

44. Xiao L, Cheng YL, Ma Y, Chen GY, Hu JF, Li YH, et al. A study on applying Delphi method for screening evaluation indexes of health literacy of China adults. Chin J Health Educ. 2008;24(2):81–4.

45. Reise SP, Waller NG. Item response theory and clinical measurement. Annu Rev Clin Psychol. 2009;5:27–48.

46. Hambleton RK, Swaminathan H. Item response theory: principles and applications. Boston: Kluwer-Nijhoff; 1985.

47. Baker FB. The basics of item response theory. 2nd ed. ERIC Clearinghouse on Assessment and Evaluation; 2001.

    Google Scholar 

  48. 48.

    Richard T. Interpretation of the correlation coefficient: a basic review. J Diagn Med Sonog. 1990;6(1):35–9.

    Article  Google Scholar 

  49. 49.

    Gerbing DW, Anderson JC. An updated paradigm for scale development incorporating unidemensionality and its assessment. J Marketing Res. 1988;25(2):186–92.

    Article  Google Scholar 

  50. 50.

    Gorsuch RL. Exploratory factor analysis: its role in item analysis. J Pers Assess. 1997;68(3):532–60.

    CAS  Article  PubMed  Google Scholar 

  51. 51.

    Velicer WF, Fava JL. Effects of variable and subject sampling on factor pattern recovery. Psychol Methods. 1998;3(2):231–51.

    Article  Google Scholar 

  52. 52.

    Nunnally JC, Bernstein IH. Psychometric Theory. New York: McGraw-Hill, Inc; 1994.

    Google Scholar 

  53. 53.

    Thomas ML. The value of item response theory in clinical assessment: a review. Assessment. 2011;18(3):291–307.

    Article  PubMed  Google Scholar 

  54. 54.

    Nutbeam D. Health literacy as a public health goal: a challenge for contemporary health education and communication strategies into the 21st century. Health Promot Int. 2000;15(3):259–67.

    Article  Google Scholar 


This work was supported by the National Natural Science Foundation of China (grant 81402770).

Author information



Corresponding author

Correspondence to Ming Hu.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

MS performed the statistical analysis and drafted the manuscript. MH participated in the design of the study and revision of the paper. SL and YC participated in data collection. ZS participated in the design of the study and coordination. All the authors read and approved the final manuscript.

About this article

Cite this article

Shen, M., Hu, M., Liu, S. et al. Assessment of the Chinese Resident Health Literacy Scale in a population-based sample in South China. BMC Public Health 15, 637 (2015).


Keywords

  • Health literacy
  • Item response theory
  • Confirmatory factor analysis
  • Measurement invariance