Year: 2017 | Volume: 30 | Issue: 1 | Page: 84-88
Assessing reading levels of health information: uses and limitations of Flesch formula
Pranay Jindal, Joy C MacDermid
Faculty of Health Sciences, School of Rehabilitation Sciences, McMaster University, Hamilton, ON, Canada
Date of Web Publication: 13-Jul-2017
Correspondence Address: 1280 Main Street West, McMaster University, Hamilton, ON
Source of Support: None, Conflict of Interest: None
Background: Written health information is commonly used by health-care professionals (HCPs) to inform and assess patients in clinical practice. With growing self-management of many health conditions and increased information-seeking behavior among patients, there is greater pressure on HCPs and researchers to develop and implement readable and understandable health information. Readability formulas such as Flesch Reading Ease (FRE) and Flesch–Kincaid Reading Grade Level (FKRGL) are commonly used by researchers and HCPs to assess whether health information is reading-grade appropriate for patients. Purpose: In this article, we critically analyze the role and credibility of the Flesch formula in assessing the reading level of written health information. Discussion: FRE and FKRGL assign a grade level by measuring semantic and syntactic difficulty. They serve as simple tools that provide some information about the potential literacy difficulty of written health information. However, health information documents often involve complex medical words and may incorporate pictures and tables to improve legibility. In their assessments, FRE and FKRGL do not take into account (1) document factors (layout, pictures and charts, color, font, spacing, legibility, and grammar), (2) person factors (education level, comprehension, health literacy, motivation, prior knowledge, information needs, and anxiety levels), or (3) style of writing (cultural sensitivity, comprehensiveness, and appropriateness), and thus assess reading level inadequately. Newer readability measures incorporate pictures and use complex algorithms to assess reading level but are only moderately used in health-care research and not in clinical practice. Future research needs to develop generic and disease-specific readability measures that evaluate comprehension of a written document based on individuals' literacy levels, cultural background, and knowledge of disease.
Keywords: Flesch Reading Ease, Flesch–Kincaid Reading Grade level, health literacy, readability formulas
|How to cite this article:|
Jindal P, MacDermid JC. Assessing reading levels of health information: uses and limitations of Flesch formula. Educ Health 2017;30:84-8
Background
Patient education is an integral part of clinical practice. Allied health-care professionals (HCPs), physicians, and nurses commonly use written medical information materials for patient education. Many patients use the Internet to read about their health conditions and treatments. The advantages of interactivity and anonymity have made the Internet a prominent source of health information for both professionals and the general public. The Internet has increased the volume of available written health information, and consumers have increasingly gone online to search for medical information. Many health-care interventions, such as diabetes management, smoking cessation, and back pain rehabilitation, are now being delivered online for a variety of perceived benefits. With the increasing push for self-management and home programs for many health conditions, there is a need for health information materials that can be easily read and understood by patients. However, there is a lack of evidence-based health information that can be accessed, read, and understood by the general public.
Written health material is predominantly used in patient education and online health information. It is important for patients to be able to read and understand the information so that they can engage better in their health care. Patient engagement is essential for meaningful health outcomes and enhanced patient satisfaction. Researchers and clinicians also use written health information to assess patient recovery, satisfaction with care, and areas for improvement in services. Readability formulas are commonly used to evaluate the reading level of written health information, which might include test protocols and instructions, home exercise and care instructions, information pamphlets, consent and assent forms, patient-reported outcome measures, and surveys. A written document that is easy to read and understand allows accurate reporting, active research participation, and enhanced clinical practice.
Despite increased use of the Internet and improved availability of online health information, many of these resources remain underutilized because they cannot be read and understood by the general public. In health-care research, many informed consent documents and patient-reported outcome measures are written above the reading ability of the average person, which might yield inaccurate results. Readability is defined as “the sum total (including all the interactions) of all those elements within a given piece of printed material that affect the success a group of readers have with it. The success is the extent to which they understand it, read it at an optimal speed, and find it interesting.”
Many readability formulas, such as Flesch, Dale–Chall, and Gunning Fog Index, exist for checking the readability of English text. A few readability formulas are also available for non-English text, for example, READ-IT (Italian); Spaulding, modified Fry graph, and Crawford formula (Spanish); and the Lix Readability Formula (Swedish). Most readability formulas derive scores from mathematical calculations based on the number of words per sentence, mean word length, and number of syllables per word. Due to the complexity involved in assessing reading ease, many researchers and health-care providers use multiple readability measures to more accurately evaluate the readability of a given written document. The Flesch formula is the most commonly used measure for assessing the readability of written health-care information materials. In this article, we critically analyze the credibility and role of the Flesch formula in assessing the readability of written health information.
The Flesch formula
Rudolph Flesch developed the Flesch Reading Ease (FRE) formula in 1948. The Flesch Reading Grade Level formula was built upon the FRE by Kincaid et al. in 1975 for the US Navy to assign a grade level to written material; it is commonly referred to as the Flesch–Kincaid Reading Grade Level (FKRGL). Both FRE and FKRGL calculate readability from two variables: average sentence length (based on the number of words) and average word length (based on the number of syllables).
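The two published formulas combine these variables linearly. A minimal sketch in Python (not used by the article's authors) illustrates the arithmetic; the vowel-group syllable counter is a naive heuristic, not a validated syllabifier:

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as vowel groups (a naive heuristic;
    dictionary-based syllabification is more accurate)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str):
    """Return (FRE, FKRGL) for a plain-text passage."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / len(sentences)        # average sentence length
    asw = syllables / len(words)             # average syllables per word
    fre = 206.835 - 1.015 * asl - 84.6 * asw    # Flesch Reading Ease
    fkrgl = 0.39 * asl + 11.8 * asw - 15.59     # Flesch-Kincaid grade
    return fre, fkrgl

fre, grade = flesch_scores("The cat sat on the mat. It was warm.")
```

Real implementations (such as the one built into word processors) differ mainly in how they split sentences and count syllables, which is where scoring artifacts can arise.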
The validity of Flesch Reading Ease and Flesch–Kincaid Reading Grade Level
Both the FRE formula and FKRGL are valid for measuring the readability of written text between US Grade 5 and college level. The FRE scores have been validated against the 1950 McCall-Crabbs Standard Test Lessons in Reading, with which they correlate 0.6. The FRE scores also correlate highly with the Fry (0.96) and Simple Measure of Gobbledygook (0.95) readability formulas. Since comprehension is essential to understanding written material, the FKRGL has also been validated against cloze comprehension tests. Because the FKRGL is validated against cloze comprehension tests and directly gives a grade level (as opposed to the estimate given by FRE), it is more commonly used in daily practice.
Both FRE and FKRGL can be calculated manually and electronically. The score derived from FRE ranges from 0 (unreadable) to 100 (very easy to read). FKRGL gives a score that corresponds to a US grade level. For example, an FKRGL of 8.5 means that the text should be understood by people who have achieved US Grade 8. To reach people with low levels of literacy, it is suggested that written health-care information materials be targeted at Grade level 8 or less in the US and Grade level 12 in the UK. Interpretations of FRE scores and estimated grade levels are in [Table 1].
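Table 1 is not reproduced in this text. The interpretation bands commonly published for FRE (the standard bands attributed to Flesch's own work, not necessarily the article's exact wording) can be sketched as a simple lookup:

```python
# Commonly cited FRE interpretation bands (after Flesch, 1948).
# Labels are the standard published ones, used here as an
# assumption since the article's Table 1 is not reproduced.
FRE_BANDS = [
    (90.0, "Very easy (about 5th grade)"),
    (80.0, "Easy (6th grade)"),
    (70.0, "Fairly easy (7th grade)"),
    (60.0, "Standard (8th-9th grade)"),
    (50.0, "Fairly difficult (10th-12th grade)"),
    (30.0, "Difficult (college)"),
    (0.0,  "Very difficult (college graduate)"),
]

def interpret_fre(score: float) -> str:
    """Map a Flesch Reading Ease score to its difficulty band."""
    for threshold, label in FRE_BANDS:
        if score >= threshold:
            return label
    return FRE_BANDS[-1][1]
```

For example, `interpret_fre(65.0)` falls in the "Standard" band, roughly the Grade 8-9 target recommended above for US health materials.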
Advantages and limitations
As both FRE and FKRGL are readily available within the MS Office suite, they are the most widely preferred readability formulas. FRE and FKRGL exclude subtitles, captions, and headings. Furthermore, FRE and FKRGL scores cannot be calculated on tables, charts, and graphics. MS Word recognizes each period as the end of a sentence; thus, abbreviations, numbers with decimals, and bullet headings can inflate the sentence count, shortening the apparent average sentence length and underestimating the grade level.
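This period-counting artifact can be demonstrated with a small sketch (a hypothetical example, not from the article): the `grade_level` function below uses a naive period-based sentence split and a vowel-group syllable heuristic to mirror the simplistic splitting behavior the article attributes to MS Word.

```python
import re

def grade_level(text: str) -> float:
    """FKRGL computed with a naive period-based sentence split and a
    vowel-group syllable heuristic (mirrors simplistic word-processor
    behavior; not a validated implementation)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# One sentence, plainly worded:
plain = "Take two tablets daily with food as your doctor directed."
# The same instruction with a dosage abbreviation; its internal
# periods are counted as sentence breaks, deflating the grade.
abbrev = "Take 2 tablets b.i.d. with food as your doctor directed."
```

Here the abbreviated version scores as several very short "sentences" and so receives a lower computed grade, even though the abbreviation is arguably harder for patients to read.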
To understand written health-care information, patients need to go beyond reading and actually comprehend the information. Comprehension depends on various factors, such as (a) text layout (title, font, colors, tables, graphics, spacing, and grammar), (b) patient motivation and prior knowledge of the subject, which help them derive meaning from the written text, and (c) patient anxiety levels. FRE and FKRGL do not take prior knowledge, motivation, layout, grammar, or graphics into consideration; they rely only on word and sentence length.
Despite the common use of FRE and FKRGL, their validation and scoring algorithms are still debated. Some authors suggest that the FRE and FKRGL have been inadequately validated. It is noteworthy that the McCall-Crabbs Standard Test was never intended to serve as a criterion for readability formulas. Both medical and lay terminology and vocabulary have changed greatly over the past 60 years, and the 1950 standards of the McCall-Crabbs test are now outdated. Importantly, the passages used in developing the McCall-Crabbs comprehension test were too short and simple, and they lack the capacity to accurately assess the reading level of a document containing complex medical words. Due to their built-in availability within MS Office, the FRE and FKRGL have become a readily available option but provide lower grade-level estimates than other readability formulas.
FRE and FKRGL scores are based on word and sentence length, so including complex medical words can raise the computed reading level of a document. However, patients with chronic health conditions may be familiar with the medical terms relating to their condition. Substituting a simpler word may decrease the grade level but might fail to capture patients' interest, undermine their confidence in the document's credibility, or fail to provide the needed in-depth information. Furthermore, in a health information document, keywords are often repeated to reinforce learning, which leads to inappropriately inflated difficulty estimates for these texts.
Current options for readability formulae
Since the use of graphics influences readability, new readability measures such as the Readability Assessment Instrument, Ensuring Quality Information for Patients, and Suitability Assessment of Materials have been developed in the past decade to assess the effectiveness of diagrams and pictures presented with text. These measures can be used by health-care providers in conjunction with FRE and FKRGL to develop effective patient education materials. Because home programs and outcome measures often use a combination of text, tables, and figures, these alternative readability measures may be more valid for them.
Researchers and clinicians can assess patient literacy and then combine readability formula findings with this literacy assessment to develop audience-appropriate education materials. Patient literacy assessment tools such as the Rapid Estimate of Adult Literacy in Medicine, Test of Functional Health Literacy in Adults, Brief Questions to Identify Patients With Inadequate Health Literacy, Newest Vital Sign, Health Activity Literacy Study, and Wide Range Achievement Test-Revised can be used to assess patients' literacy levels.
Recent developments in computational linguistics have targeted the problems of outdated vocabulary and multiple meanings of the same word faced by FRE and FKRGL. New tools such as the ATOS readability formula, Pearson Reading Maturity Metric, Coh-Metrix Text Easability Assessor, and TextEvaluator are online tools that constantly update their vocabularies. These tools also use complex algorithms that measure semantic, structural, and vocabulary aspects to provide a more accurate assessment of reading levels. They have been used widely by teachers to assess the reading level of books and passages in English; however, their utilization and utility in health research require further study.
Discussion
Assessment of the reading grade level of written documents is a complex issue. It is influenced by factors related to the document (layout, color, font, spacing, legibility, and grammar); the person (education, comprehension, health literacy, motivation, prior knowledge, information needs, and anxiety levels); and the style of writing (cultural sensitivity, context, comprehensiveness, and appropriateness). Readability formulas are inadequate objective measures but are used as quick indicators to assist writers in targeting information to the lay public. Sole reliance on FRE and FKRGL or other readability formulas to assess grade level can be misleading. It is important for HCPs and researchers to move beyond assessing reading grade level (simply whether the patient can read the document) toward assessing the patient's understanding of the document. Assessing understanding is a complex process influenced by many intertwined variables. Developing a written document that engages the reader, focuses on the key information, and is technically correct requires consideration of multiple factors related to the document, the target audience, and the information being provided.
Multiple readability formulas exist; however, there is a lack of culturally sensitive, disease-specific, and context-based readability measures. Future research should focus on developing generic and disease-specific online readability measures that can assess the comprehension of a written document based on individuals' literacy levels, cultural background, cognitive ability, vocabulary, and knowledge of the disease. Future readability measures should also be able to assess the reading level of a document containing pictures and charts. Developing and using a measure customized to literacy levels, cultural background, and disease knowledge might also help solve the ongoing problem of health-care information being written at a grade level higher than recommended.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
Cline RJ, Haynes KM. Consumer health information seeking on the internet: The state of the art. Health Educ Res 2001;16:671-92.
Baker L, Wagner TH, Singer S, Bundorf MK. Use of the internet and e-mail for health care information: Results from a national survey. JAMA 2003;289:2400-6.
Diaz JA, Griffith RA, Ng JJ, Reinert SE, Friedmann PD, Moulton AW. Patients' use of the Internet for medical information. J Gen Intern Med 2002;17:180-5.
Powell JA, Darvell M, Gray JA. The doctor, the patient and the world-wide web: How the internet is changing healthcare. J R Soc Med 2003;96:74-6.
Eysenbach G, Kohler CH. What is the prevalence of health-related searches on the World Wide Web? Qualitative and quantitative analysis of search engine queries on the internet. AMIA Annu Symp Proc 2003;225-9.
Spink A, Yang Y, Jansen J, Nykanen P, Lorence DP, Ozmutlu S, et al. A study of medical and health queries to web search engines. Health Info Libr J 2004;21:44-51.
Hamm MP, Chisholm A, Shulhan J, Milne A, Scott SD, Given LM, et al. Social media use among patients and caregivers: A scoping review. BMJ Open 2013;3. pii: e002819.
Griffiths F, Lindenmeyer A, Powell J, Lowe P, Thorogood M. Why are health care interventions delivered over the internet? A systematic review of the published literature. J Med Internet Res 2006;8:e10.
Jindal P, MacDermid J. Type and extent of knowledge translation resources published by peer-reviewed rehabilitation journals. Crit Rev Phys Rehabil Med 2015;27:105-22.
Coulter A. Patient engagement – What works? J Ambul Care Manage 2012;35:80-9.
Walsh TM, Volsko TA. Readability assessment of internet-based consumer health information. Respir Care 2008;53:1310-5.
Cotugna N, Vickery CE, Carpenter-Haefele KM. Evaluation of literacy level of patient education pages in health-related journals. J Community Health 2005;30:213-9.
McInnes N, Haglund BJ. Readability of online health information: Implications for health literacy. Inform Health Soc Care 2011;36:173-89.
Agarwal N, Hansberry DR, Sabourin V, Tomei KL, Prestigiacomo CJ. A comparative analysis of the quality of patient education materials from medical specialties. JAMA Intern Med 2013;173:1257-9.
Paz SH, Liu H, Fongwa MN, Morales LS, Hays RD. Readability estimates for commonly used health-related quality of life surveys. Qual Life Res 2009;18:889-900.
Paasche-Orlow MK, Taylor HA, Brancati FL. Readability standards for informed-consent forms as compared with actual readability. N Engl J Med 2003;348:721-6.
Dale E, Chall JS. The concept of readability. Natl Counc Teach Engl 1949;26:19-26.
Ley P, Florio T. The use of readability formulas in health care. Psychol Health Med 1996;1:7-28.
Read-It: Assessing Readability of Italian Texts with a View to Text Simplification. Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies: Association for Computational Linguistics; 2011.
Parker RI, Hasbrouck JE, Weaver L. Spanish readability formulas for elementary-level texts: A validation study. Read Writ Q 2001;17:307-22.
Hansberry DR, Agarwal N, Baker SR. Health literacy and online educational resources: An opportunity to educate patients. AJR Am J Roentgenol 2015;204:111-6.
D'Alessandro DM, Kingsley P, Johnson-West J. The readability of pediatric patient education materials on the World Wide Web. Arch Pediatr Adolesc Med 2001;155:807-12.
Flesch R. A new readability yardstick. J Appl Psychol 1948;32:221-33.
Kincaid JP, Fishburne RP, Rogers RL, Chissom BS. Derivation of New Readability Formulas (Automated Readability Index, Fog Count and Flesch Reading Ease Formula) for Navy Enlisted Personnel. Naval Technical Training, U. S. Naval Air Station; 1975. p. 1-48.
Wang LW, Miller MJ, Schmitt MR, Wen FK. Assessing readability formula differences with written health information materials: Application, results, and recommendations. Res Social Adm Pharm 2013;9:503-16.
Jacobson DM, Kirkland E, Selden WR. An examination of the McCall-Crabbs standard test lessons in reading. J Read 1978;22:224-30.
DuBay WH. The Principles of Readability. California: Impact Information; 2004.
Meade CD, Smith CF. Readability formulas: Cautions and criteria. Patient Educ Couns 1991;17:153-8.
Taylor WL. “Cloze procedure”: A new tool for measuring readability. Journalism Q 1953;30:415.
Doak CC, Doak LG, Root JH. Teaching Patients with Low Literacy Skills. 2nd ed. Philadelphia: J.B. Lippincott; 1996.
Kutner M, Greenberg E, Jin Y, Paulsen C. The Health Literacy of America's Adults: Results From the 2003 National Assessment of Adult Literacy. U.S. Department of Education; 2006. p. 1-60.
Stockmeyer NO. Using Microsoft word's readability program. Mich Bar J 2009;88:46.
Friedman DB, Hoffman-Goetz L. A systematic review of readability and comprehension instruments used for print and web-based cancer information. Health Educ Behav 2006;33:352-73.
Cherla DV, Sanghvi S, Choudhry OJ, Liu JK, Eloy JA. Readability assessment of internet-based patient education materials related to endoscopic sinus surgery. Laryngoscope 2012;122:1649-54.
Bernardini C, Ambrogi V, Fardella G, Perioli L, Grandolini G. How to improve the readability of the patient package leaflet: A survey on the use of colour, print size and layout. Pharmacol Res 2001;43:437-44.
Bailin A, Grafstein A. The linguistic assumptions underlying readability formulae: A critique. Lang Commun 2001;21:285-301.
Estey A, Musseau A, Keehn L. Patient's understanding of health information: A multihospital comparison. Patient Educ Couns 1994;24:73-8.
Pringle MB, Natesh BG, Konieczny KM. Patient information leaflet on mastoid surgery risks: Assessment of readability and patient understanding. J Laryngol Otol 2013;127:1078-83.
Stevens KC. Readability formulae and McCall-Crabbs standard test lessons in reading. Read Teach 1980;33(4):413-5.
Wilson M. Readability and patient education materials used for low-income populations. Clin Nurse Spec 2009;23:33-40.
Green GM, Olsen MS. Preferences for and Comprehension of Original and Readability-Adapted Materials. Technical Report 393. University of Illinois at Urbana-Champaign; 1986.
Singh J. Readability Assessment Instrument User's Manual. 2nd ed. Midlothian, VA: ONE Publications; 2005.
Moult B, Franck LS, Brady H. Ensuring quality information for patients: Development and preliminary validation of a new instrument to improve the quality of written health care information. Health Expect 2004;7:165-75.
Suitability Assessment of Materials (SAM). American Public Health Association Annual Meeting; 1994.
Murphy PW, Davis TC, Long SW, Jackson RH, Decker BC. Rapid estimate of adult literacy in medicine (REALM): A quick reading test for patients. J Read 1993;37:124-30.
Parker RM, Baker DW, Williams MV, Nurss JR. The test of functional health literacy in adults: A new instrument for measuring patients' literacy skills. J Gen Intern Med 1995;10:537-41.
Chew LD, Bradley KA, Boyko EJ. Brief questions to identify patients with inadequate health literacy. Fam Med 2004;36:588-94.
Weiss BD, Mays MZ, Martz W, Castro KM, DeWalt DA, Pignone MP, et al. Quick assessment of literacy in primary care: The newest vital sign. Ann Fam Med 2005;3:514-22.
Rudd RE. Health literacy skills of U.S. adults. Am J Health Behav 2007;31 Suppl 1:S8-18.
Robertson GJ. Wide Range Achievement Test. In: Corsini Encyclopedia of Psychology. Vol. 1-2. Wiley Online Library; 2010.
Reid N. Wide Range achievement test: 1984 revised edition. J Couns Dev 1986;64:538-9.
McNamara DS, Graesser AC. Coh-Metrix: An Automated Tool for Theoretical and Applied Natural Language Processing. Applied Natural Language Processing and Content Analysis: Identification, Investigation, and Resolution. Hershey, PA: IGI Global; 2012.
Sheehan KM, Kostin I, Napolitano D, Flor M. The text evaluator tool. Elem Sch J 2014;115:184-209.