GENERAL ARTICLE
Year : 2017  |  Volume : 30  |  Issue : 1  |  Page : 84-88

Assessing reading levels of health information: uses and limitations of the Flesch formula


Faculty of Health Sciences, School of Rehabilitation Sciences, McMaster University, Hamilton, ON, Canada

Date of Web Publication13-Jul-2017

Correspondence Address:
Pranay Jindal
1280 Main Street West, McMaster University, Hamilton, ON
Canada

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/1357-6283.210517

  Abstract 

Background: Written health information is commonly used by health-care professionals (HCPs) to inform and assess patients in clinical practice. With growing self-management of many health conditions and increased information-seeking behavior among patients, there is greater pressure on HCPs and researchers to develop and implement readable and understandable health information. Readability formulas such as the Flesch Reading Ease (FRE) and Flesch–Kincaid Reading Grade Level (FKRGL) are commonly used by researchers and HCPs to assess whether health information is written at a reading grade level appropriate for patients. Purpose: In this article, we critically analyze the role and credibility of the Flesch formula in assessing the reading level of written health information. Discussion: The FRE and FKRGL assign a grade level by measuring semantic and syntactic difficulty. They serve as simple tools that provide some information about the potential literacy difficulty of written health information. However, health information documents often involve complex medical words and may incorporate pictures and tables to improve legibility. In their assessments, the FRE and FKRGL do not take into account (1) document factors (layout, pictures and charts, color, font, spacing, legibility, and grammar), (2) person factors (education level, comprehension, health literacy, motivation, prior knowledge, information needs, and anxiety levels), or (3) style of writing (cultural sensitivity, comprehensiveness, and appropriateness), and thus assess reading level inadequately. Newer readability measures incorporate pictures and use complex algorithms to assess reading level but are only moderately used in health-care research and not in clinical practice. Future research needs to develop generic and disease-specific readability measures that evaluate comprehension of a written document based on individuals' literacy levels, cultural background, and knowledge of disease.

Keywords: Flesch Reading Ease, Flesch–Kincaid Reading Grade level, health literacy, readability formulas


How to cite this article:
Jindal P, MacDermid JC. Assessing reading levels of health information: uses and limitations of the Flesch formula. Educ Health 2017;30:84-8



Background


Patient education is an integral part of clinical practice. Allied health-care professionals (HCPs), physicians, and nurses commonly use written medical information materials for patient education. Many patients use the Internet to read about their health conditions and treatments. The advantages of interactivity and anonymity [1] have made the Internet a prominent source of health information for both professionals and the general public.[2],[3] The Internet has increased the volume of available written health information, and consumers increasingly go online to search for medical information.[2],[4],[5],[6],[7] Many health-care interventions, such as diabetes management, smoking cessation, and back pain rehabilitation, are now delivered online for a variety of perceived benefits.[8] With the increasing push for self-management and home programs for many health conditions, there is a need for health information materials that patients can easily read and understand. However, there is a lack of evidence-based health information that the general public can access, read, and understand.[9]

Written health material predominates in patient education and online health information. Patients must be able to read and understand this information to engage more fully in their healthcare; patient engagement is essential for meaningful health outcomes and enhanced patient satisfaction.[10] Researchers and clinicians also use written health information to assess patient recovery, satisfaction with care, and areas for improvement in services. Readability formulas are commonly used to evaluate the reading level of written health information, which may include test protocols and instructions, home exercise and care instructions, information pamphlets, consent and assent forms, patient-reported outcome measures, and surveys. A document that is easy to read and understand allows accurate reporting, active research participation, and enhanced clinical practice.

Despite increased use of the Internet and improved availability of online health information, many of these resources remain underutilized because the general public cannot read and understand them.[11],[12],[13],[14] In health-care research, many informed consent documents and patient-reported outcome measures are written above the reading ability of the average reader,[15],[16] which can yield inaccurate results. Readability is defined as “the sum total (including all the interactions) of all those elements within a given piece of printed material that affect the success a group of readers have with it. The success is the extent to which they understand it, read it at an optimal speed, and find it interesting.”[17]

Many readability formulas, such as the Flesch, Dale–Chall, and Gunning Fog Index, exist for checking the readability of English text.[18] A few readability formulas are also available for non-English text, for example, READ-IT [19] (Italian); Spaulding, the modified Fry graph, and the Crawford formula [20] (Spanish); and the Lix Readability Formula [21] (Swedish). Most reading formulas compute scores from the number of words per sentence, mean word length, and number of syllables per word.[22] Because of the complexity involved in assessing reading ease, many researchers and health-care providers use multiple readability measures to evaluate a written document more accurately.[23] The Flesch formula is the most commonly used measure of the readability of written health-care information materials.[18],[24],[25],[26] In this article, we critically analyze the credibility and role of the Flesch formula in assessing the readability of written health information.

The Flesch formula

Rudolph Flesch developed the Flesch Reading Ease (FRE) formula in 1948. Kincaid et al. built upon the FRE in 1975, for the US Navy, to produce a formula that assigns a grade level to written material;[25] it is commonly referred to as the Flesch–Kincaid Reading Grade Level (FKRGL). Both the FRE and FKRGL calculate readability from two variables: average sentence length (based on the number of words) and average word length (based on the number of syllables).
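The two formulas can be sketched in a few lines of Python. The constants are the published ones from Flesch (1948) and Kincaid et al. (1975); the syllable counter below, however, is a rough vowel-group heuristic of our own (real implementations use pronunciation dictionaries), so scores will differ slightly from, say, MS Word's output.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: number of vowel groups, with a crude
    silent-'e' correction. A heuristic, not a dictionary lookup."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1  # drop a trailing silent 'e'
    return max(n, 1)

def flesch_scores(text: str):
    """Return (FRE, FKRGL) for a plain-text passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)          # average sentence length
    spw = syllables / len(words)               # average word length
    fre = 206.835 - 1.015 * wps - 84.6 * spw   # Flesch Reading Ease
    fkrgl = 0.39 * wps + 11.8 * spw - 15.59    # Flesch-Kincaid grade level
    return fre, fkrgl
```

Because both formulas use only `wps` and `spw`, any two passages with the same average sentence and word lengths receive identical scores regardless of layout, vocabulary familiarity, or grammar, which is exactly the limitation discussed below.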

The validity of Flesch Reading Ease and Flesch–Kincaid Reading Grade Level

Both the FRE and FKRGL are valid for measuring the readability of written text between US Grade 5 and college level.[24],[25] FRE scores have been validated against the 1950 McCall-Crabbs Standard Test Lessons in Reading,[27] with which they correlate at 0.6.[28] FRE scores also correlate highly with the Fry (0.96) and Simple Measure of Gobbledygook (0.95) readability formulas.[29] Because comprehension is essential to understanding written material, the FKRGL has also been validated against cloze comprehension tests.[25],[30] Since the FKRGL is validated against cloze comprehension tests and directly gives a grade level (as opposed to the estimate given by the FRE), it is more commonly used in daily practice.[26]

Scoring

Both the FRE and FKRGL can be calculated manually or electronically. FRE scores range from 0 (unreadable) to 100 (very easy to read). The FKRGL gives a score that corresponds to a US grade level; for example, an FKRGL of 8.5 means that the text should be understood by people who have completed US Grade 8. To reach people with low levels of literacy, it is suggested that written health-care information materials be targeted at Grade 8 or below in the US [31],[32] and Grade 12 in the UK.[31] Interpretations of FRE scores [28] and estimated grade levels are in [Table 1].
Table 1: Interpretation of Flesch reading ease scores


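The conventional FRE bands, as popularized by Flesch and summarized by DuBay,[28] can be encoded as a simple lookup. The exact wording of the labels below is our paraphrase of those conventional bands, not a reproduction of [Table 1].

```python
def interpret_fre(score: float) -> str:
    """Map an FRE score to its conventional difficulty band
    (cutoffs as popularized by Flesch; labels paraphrased)."""
    bands = [
        (90, "very easy (about 5th grade)"),
        (80, "easy (6th grade)"),
        (70, "fairly easy (7th grade)"),
        (60, "standard (8th-9th grade)"),
        (50, "fairly difficult (10th-12th grade)"),
        (30, "difficult (college)"),
    ]
    for cutoff, label in bands:
        if score >= cutoff:
            return label
    return "very difficult (college graduate)"
```

Note that a Grade 8 target corresponds roughly to an FRE of 60 or above, which is why both scores are often reported together.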

Advantages and limitations

Because both the FRE and FKRGL are readily available within the MS Office suite, they are the most widely preferred readability formulas.[33] However, the FRE and FKRGL exclude subtitles, captions, and headings,[34] and their scores cannot be calculated on tables, charts, and graphics.[35] In addition, MS Word treats each period as the end of a sentence, so abbreviations, numbers with decimals, and bullet headings artificially shorten the apparent sentences, which can underestimate the grade level.[34]
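The period-splitting pitfall is easy to demonstrate with a naive splitter of the kind such tools use (illustrative only; MS Word's actual tokenizer is not public):

```python
import re

# Naive splitting treats every period as a sentence boundary, so the
# abbreviation "Dr." and the decimal "2.5" inflate the sentence count
# and shrink the apparent average sentence length.
text = "Dr. Smith prescribed 2.5 mg of the drug twice daily."

naive = [s for s in re.split(r"\.", text) if s.strip()]
# -> ["Dr", " Smith prescribed 2", "5 mg of the drug twice daily"]
print(len(naive))  # 3 "sentences" where a human reader sees 1
```

One real sentence of ten words becomes three "sentences" averaging about three words each, so the computed grade level drops well below what a human reader experiences.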

Critical analysis

To understand written health-care information, patients need to go beyond reading and actually comprehend the information. Comprehension depends on various factors, such as (a) text layout (title, font, colors, tables, graphics, spacing, and grammar),[31],[36] (b) patient motivation and prior knowledge of the subject, which help readers derive meaning from the written text,[37] and (c) patient anxiety levels.[38] The FRE and FKRGL take none of these into consideration and rely only on word and sentence length.[34],[39]

Despite the common use of the FRE and FKRGL, their validation and scoring algorithms are still debated. Some authors argue that the FRE and FKRGL have been inadequately validated.[40] Notably, the McCall-Crabbs Standard Test was never intended to serve as a criterion for readability formulas.[40] Both medical and lay terminology have changed greatly over the past 60 years, and the 1950 norms of the McCall-Crabbs test are now outdated.[27] Importantly, the passages used in developing the McCall-Crabbs comprehension test were too short and simple [40] to accurately assess the reading level of a document containing complex medical words. Their built-in availability within MS Office has made the FRE and FKRGL a convenient option, but they provide lower estimates of readability than other reading formulas.[41]

FRE and FKRGL scores are based on word and sentence length, so including a complex medical word can raise the computed reading level of a document. However, patients with chronic health conditions may be familiar with the medical terms relating to their condition. Substituting a simpler or less precise word may lower the grade level but might fail to capture patients' interest, undermine their confidence in the document's credibility, or omit needed in-depth information.[39],[42] Furthermore, in a health information document, keywords are often repeated to reinforce learning, which can inappropriately inflate the grade level these formulas assign to such texts.

Current options for readability formulae

Since the use of graphics influences readability, newer readability measures such as the Readability Assessment Instrument,[43] Ensuring Quality Information for Patients,[44] and Suitability Assessment of Materials [45] have been developed in the past decade to assess the effectiveness of diagrams and pictures alongside text. Health-care providers can use these measures in conjunction with the FRE and FKRGL to develop effective patient education materials. Because home programs and outcome measures often combine text, tables, and figures, these alternative readability measures may be more valid for them.

Researchers and clinicians can assess patient literacy and then combine readability formula findings with the literacy assessment to develop audience-appropriate education materials. Patient literacy assessment tools include the Rapid Estimate of Adult Literacy in Medicine,[46] Test of Functional Health Literacy in Adults,[47] Brief Questions to Identify Patients with Inadequate Health Literacy,[48] Newest Vital Sign,[49] Health Activity Literacy Study,[50] and Wide Range Achievement Test-Revised.[51],[52]

Recent developments in computational linguistics have targeted the problems of outdated vocabulary and multiple word senses that the FRE and FKRGL face. New tools such as the ATOS readability formula,[53],[54] Pearson Reading Maturity Metric,[55] Coh-Metrix Text Easability Assessor,[56] and TextEvaluator [57] are online tools that continually update their vocabularies. They also use complex algorithms that measure semantic, structural, and vocabulary features to provide a more accurate assessment of reading level. These tools have been used widely by classroom teachers to assess the reading level of English books and passages; however, their utilization and utility in health research require further study.


Discussion


Assessing the reading grade level of written documents is a complex issue. It is influenced by factors related to the document (layout, color, font, spacing, legibility, and grammar); the person (education, comprehension, health literacy, motivation, prior knowledge, information needs, and anxiety levels); and the style of writing (cultural sensitivity, context, comprehensiveness, and appropriateness). Reading formulas are inadequate objective measures but are used as quick indicators to assist writers in targeting information to the lay public. Sole reliance on the FRE, FKRGL, or other reading formulas to assess grade level can be misleading. HCPs and researchers need to move beyond assessing reading grade level (simply whether the patient can read the document) toward assessing the patient's understanding of it. Assessing understanding is a complex process influenced by many intertwined variables. Developing a written document that engages the reader, focuses on the key information, and is technically correct requires consideration of multiple factors related to the document, the target audience, and the information being provided.

Future directions

Multiple reading formulas exist; however, there is a lack of culturally sensitive, disease-specific, and context-based readability measures. Future research should focus on developing generic and disease-specific online readability measures that can assess the comprehension of a written document based on individuals' literacy levels, cultural background, cognitive ability, vocabulary, and knowledge of the disease. Future readability measures should also be able to assess the reading level of documents containing pictures and charts. Developing and using a measure customized to literacy levels, cultural background, and disease knowledge might also help solve the ongoing problem of health-care information being written at a grade level higher than recommended.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

 
References

1. Cline RJ, Haynes KM. Consumer health information seeking on the internet: The state of the art. Health Educ Res 2001;16:671-92.
2. Baker L, Wagner TH, Singer S, Bundorf MK. Use of the internet and e-mail for health care information: Results from a national survey. JAMA 2003;289:2400-6.
3. Diaz JA, Griffith RA, Ng JJ, Reinert SE, Friedmann PD, Moulton AW. Patients' use of the Internet for medical information. J Gen Intern Med 2002;17:180-5.
4. Powell JA, Darvell M, Gray JA. The doctor, the patient and the world-wide web: How the internet is changing healthcare. J R Soc Med 2003;96:74-6.
5. Eysenbach G, Kohler CH. What is the prevalence of health-related searches on the World Wide Web? Qualitative and quantitative analysis of search engine queries on the internet. AMIA Annu Symp Proc 2003;225-9.
6. Spink A, Yang Y, Jansen J, Nykanen P, Lorence DP, Ozmutlu S, et al. A study of medical and health queries to web search engines. Health Info Libr J 2004;21:44-51.
7. Hamm MP, Chisholm A, Shulhan J, Milne A, Scott SD, Given LM, et al. Social media use among patients and caregivers: A scoping review. BMJ Open 2013;3. pii: e002819.
8. Griffiths F, Lindenmeyer A, Powell J, Lowe P, Thorogood M. Why are health care interventions delivered over the internet? A systematic review of the published literature. J Med Internet Res 2006;8:e10.
9. Jindal P, MacDermid J. Type and extent of knowledge translation resources published by peer-reviewed rehabilitation journals. Crit Rev Phys Rehabil Med 2015;27:105-22.
10. Coulter A. Patient engagement – What works? J Ambul Care Manage 2012;35:80-9.
11. Walsh TM, Volsko TA. Readability assessment of internet-based consumer health information. Respir Care 2008;53:1310-5.
12. Cotugna N, Vickery CE, Carpenter-Haefele KM. Evaluation of literacy level of patient education pages in health-related journals. J Community Health 2005;30:213-9.
13. McInnes N, Haglund BJ. Readability of online health information: Implications for health literacy. Inform Health Soc Care 2011;36:173-89.
14. Agarwal N, Hansberry DR, Sabourin V, Tomei KL, Prestigiacomo CJ. A comparative analysis of the quality of patient education materials from medical specialties. JAMA Intern Med 2013;173:1257-9.
15. Paz SH, Liu H, Fongwa MN, Morales LS, Hays RD. Readability estimates for commonly used health-related quality of life surveys. Qual Life Res 2009;18:889-900.
16. Paasche-Orlow MK, Taylor HA, Brancati FL. Readability standards for informed-consent forms as compared with actual readability. N Engl J Med 2003;348:721-6.
17. Dale E, Chall JS. The concept of readability. Natl Counc Teach Engl 1949;26:19-26.
18. Ley P, Florio T. The use of readability formulas in health care. Psychol Health Med 1996;1:7-28.
19. READ-IT: Assessing Readability of Italian Texts with a View to Text Simplification. Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies. Association for Computational Linguistics; 2011.
20. Parker RI, Hasbrouck JE, Weaver L. Spanish readability formulas for elementary-level texts: A validation study. Read Writ Q 2001;17:307-22.
21. Björnsson C. Lix Readability Formula: The Lasbarhetsindex Swedish Readability Formula; 2016. Available from: http://www.readabilityformulas.com/the-LIX-readability-formula.php. [Last accessed on 2016 Aug 26].
22. Hansberry DR, Agarwal N, Baker SR. Health literacy and online educational resources: An opportunity to educate patients. AJR Am J Roentgenol 2015;204:111-6.
23. D'Alessandro DM, Kingsley P, Johnson-West J. The readability of pediatric patient education materials on the World Wide Web. Arch Pediatr Adolesc Med 2001;155:807-12.
24. Flesch R. A new readability yardstick. J Appl Psychol 1948;32:221-33.
25. Kincaid JP, Fishburne RP, Rogers RL, Chissom BS. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel. Naval Technical Training, U.S. Naval Air Station; 1975. p. 1-48.
26. Wang LW, Miller MJ, Schmitt MR, Wen FK. Assessing readability formula differences with written health information materials: Application, results, and recommendations. Res Social Adm Pharm 2013;9:503-16.
27. Jacabson DM, Kirkland E, Selden WR. An examination of the McCall-Crabbs standard test lessons in reading. J Read 1978;22:224-30.
28. DuBay WH. The Principles of Readability. California: Impact Information; 2004.
29. Meade CD, Smith CF. Readability formulas: Cautions and criteria. Patient Educ Couns 1991;17:153-8.
30. Taylor WL. “Cloze procedure”: A new tool for measuring readability. Journal Q 1953;30:415.
31. Doak CC, Doak LG, Root JH. Teaching Patients with Low Literacy Skills. 2nd ed. Philadelphia: J.B. Lippincott; 1996.
32. Kutner M, Greenberg E, Jin Y, Paulsen C. The Health Literacy of America's Adults: Results from the 2003 National Assessment of Adult Literacy. U.S. Department of Education; 2006. p. 1-60.
33. Stockmeyer NO. Using Microsoft Word's readability program. Mich Bar J 2009;88:46.
34. Friedman DB, Hoffman-Goetz L. A systematic review of readability and comprehension instruments used for print and web-based cancer information. Health Educ Behav 2006;33:352-73.
35. Cherla DV, Sanghvi S, Choudhry OJ, Liu JK, Eloy JA. Readability assessment of internet-based patient education materials related to endoscopic sinus surgery. Laryngoscope 2012;122:1649-54.
36. Bernardini C, Ambrogi V, Fardella G, Perioli L, Grandolini G. How to improve the readability of the patient package leaflet: A survey on the use of colour, print size and layout. Pharmacol Res 2001;43:437-44.
37. Bailin A, Grafstein A. The linguistic assumptions underlying readability formulae: A critique. Lang Commun 2001;21:285-301.
38. Estey A, Musseau A, Keehn L. Patient's understanding of health information: A multihospital comparison. Patient Educ Couns 1994;24:73-8.
39. Pringle MB, Natesh BG, Konieczny KM. Patient information leaflet on mastoid surgery risks: Assessment of readability and patient understanding. J Laryngol Otol 2013;127:1078-83.
40. Stevens KC. Readability formulae and McCall-Crabbs standard test lessons in reading. Read Teach 1980;33(4):413-5.
41. Wilson M. Readability and patient education materials used for low-income populations. Clin Nurse Spec 2009;23:33-40.
42. Green GM, Olsen MS. Preferences for and Comprehension of Original and Readability-Adapted Materials. Technical Report 393. University of Illinois at Urbana-Champaign; 1986.
43. Singh J. Readability Assessment Instrument User's Manual. 2nd ed. Midlothian, VA: ONE Publications; 2005.
44. Moult B, Franck LS, Brady H. Ensuring quality information for patients: Development and preliminary validation of a new instrument to improve the quality of written health care information. Health Expect 2004;7:165-75.
45. Suitability Assessment of Materials (SAM). American Public Health Association Annual Meeting; 1994.
46. Murphy PW, Davis TC, Long SW, Jackson RH, Decker BC. Rapid estimate of adult literacy in medicine (REALM): A quick reading test for patients. J Read 1993;37:124-30.
47. Parker RM, Baker DW, Williams MV, Nurss JR. The test of functional health literacy in adults: A new instrument for measuring patients' literacy skills. J Gen Intern Med 1995;10:537-41.
48. Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med 2004;36:588-94.
49. Weiss BD, Mays MZ, Martz W, Castro KM, DeWalt DA, Pignone MP, et al. Quick assessment of literacy in primary care: The newest vital sign. Ann Fam Med 2005;3:514-22.
50. Rudd RE. Health literacy skills of U.S. adults. Am J Health Behav 2007;31 Suppl 1:S8-18.
51. Robertson GJ. Wide-Range Achievement Test. In: Corsini Encyclopedia of Psychology. Vol. 1-2. Wiley Online Library; 2010.
52. Reid N. Wide Range Achievement Test: 1984 revised edition. J Couns Dev 1986;64:538-9.
53. The ATOS Readability Formula for Books and How It Compares to Other Formulas; 2000. Available from: http://www.files.eric.ed.gov/fulltext/ED449468.pdf. [Last accessed on 2016 Aug 16].
54. Milone M. The Development of ATOS: The Renaissance Readability Formula; 2009. Available from: http://www.doc.renlearn.com/KMNet/R004250827GJ11C4.pdf. [Last accessed on 2016 Aug 16].
55. Reading Maturity Metric (RMM); 2013. Available from: http://www.pearsonassessments.com/automatedlanguageassessment/products/100000021/reading-maturity-metric-rmm.html#tab-details. [Last accessed on 2016 Aug 15].
56. McNamara DS, Graesser AC. Coh-Metrix: An automated tool for theoretical and applied natural language processing. In: Applied Natural Language Processing and Content Analysis: Identification, Investigation, and Resolution. Hershey, PA: IGI Global; 2012.
57. Sheehan KM, Kostin I, Napolitano D, Flor M. The TextEvaluator tool. Elem Sch J 2014;115:184-209.



 
 