

 
ORIGINAL RESEARCH PAPER
Year : 2010  |  Volume : 23  |  Issue : 3  |  Page : 466

Longitudinal Development of Medical Students' Communication Skills in Interpreted Encounters


1 University of California, Irvine, California, USA
2 Stanford University, Stanford, California, USA

Date of Submission: 27-Mar-2010
Date of Acceptance: 27-Oct-2010
Date of Web Publication: 30-Nov-2010

Correspondence Address:
D A Lie
Ste 512, 101 The City Dr. S, Orange, CA 92868
USA

Source of Support: None, Conflict of Interest: None


PMID: 21290365

  Abstract 

Objective: To describe the longitudinal development of medical students' communication skills for the interpreted encounter.
Method: Two successive classes of students (n=92 and 100) participated in standardized clinical stations testing general communication skills and skills for working with interpreters at the end of their second year and after completing clinical clerkships during their third year. Performance was rated by standardized patients, interpreters and students using validated scales.
Analysis: Analysis of individual matched paired data was performed for each scale item using the Wilcoxon signed-rank test. Pairwise correlation was used to compare global scores of the standardized patient and standardized interpreter with student self-ratings.
Results: Over one year students' (n=124-168) performance worsened in behaviors for 'managing the encounter' (per interpreters' ratings) or remained unchanged (per patients' ratings). By patients' ratings, performance scores in general communication remained high. Students rated themselves as significantly improved in five of eight skills for working with interpreters despite a lack of external evidence of improvement from patient or interpreter. Students showed a trend toward underestimating their own global skills at baseline and overestimating them in comparison with the interpreters' global ratings.
Discussion: Students' general communication skills remained excellent over one year of training but some skills for working with interpreters worsened. Over time students showed a pattern of overrating their own skills compared with trained observers. Faculty who teach students should focus on specific behaviors that are most likely to decay without reinforcement and practice.

Keywords: Communication, cultural competence, interpreters, medical students, assessment, medical education, limited English proficiency, language access


How to cite this article:
Lie D A, Bereknyei S, Vega C P. Longitudinal Development of Medical Students' Communication Skills in Interpreted Encounters. Educ Health 2010;23:466


Background and Objectives



In the United States (US), within a predominantly English-speaking environment, patients with limited English proficiency have worse clinical outcomes and poorer access to care than English-proficient patients [1-4]. At the same time, there is good evidence that improving language access through the use of interpreters increases patient satisfaction and improves health outcomes [5,6]. An increasing number of policy actions in the US mandate minimum standards for language access for patients and the allocation of resources directed at meeting these requirements [8-13].



Cultural competence curricula highlight the need for learners, including medical students, residents, and physician assistant and nursing students, to be trained in working effectively with interpreters [14-17]. The Association of American Medical Colleges' Tool for Assessing Cultural Competence Training describes a skill set for medical students (MS) that includes the knowledge and skills of working effectively with interpreters as a distinct learning entity [17-19]. Yet, even though standards for medical interpreters' performance in clinical encounters are available [20-22], the corresponding standards for providers to work effectively with interpreters to optimize patient satisfaction and health outcomes have not been adequately described. Validated tools to assess these skills [14,23,24] were developed for medical student and nurse practitioner student training in a standardized clinical encounter with standardized patients and interpreters. The question remains as to which curricular elements need to be addressed by faculty to ensure that the appropriate skills are attained and maintained during training.



Many medical school clerkships now include the teaching of cross-cultural care, and working with interpreters in a triadic clinical setting is an important part of this curriculum. Additionally, training programs often incorporate clinical practice examinations, using standardized patients to test clinical and communication skills. At the University of California, Irvine (UCI), we tracked communication skills of two classes of medical students from year 2 to 3 of training, with the goal of helping faculty identify elements of communication that require formal teaching and feedback, especially when caring for patients with limited English proficiency.



We hypothesized that students would show improvement in objective ratings of communication skills in the interpreted encounter over one year of clinical training. Our objective was to identify which behaviors improved and which did not, so that faculty can focus on the requisite skills when teaching about language-discordant encounters. The institution's human subjects review board approved the study.



Methods



Study participants



Participants were 192 second-year medical students (MS 2) at one US school, consisting of two successive classes (graduating in 2009 and 2010; n=92 and 100, respectively).



Clinical Practice Examination (CPX) Interpreter Station



Each class participated in a standardized clinical station as part of a multi-station CPX at baseline (end of year 2) and at the end of year 3, after completion of all required clerkships; there was thus a one-year interval between baseline and follow-up testing. The baseline CPX was part of a formative end-of-year assessment for an 18-month Doctoring course. Details of the 15-minute interpreter station testing interview and counseling skills for smoking cessation have been previously described [23,24]. Spanish was chosen because it was the predominant language of monolingual patients encountered in the school's teaching clinics, while Vietnamese was the next most commonly encountered language in the community. Medical students were informed just before the encounter that the station assessed their ability to communicate with a monolingual non-English-speaking patient through an interpreter, and not their language proficiency. They were instructed not to communicate in the patient's language even if they were slightly proficient in Spanish; medical students proficient in Spanish were assigned to a Vietnamese-speaking patient. The same station was administered in year 3, one year later, using the same rating measures and the same patient and interpreter raters. Standardization of case performance and of patient and interpreter ratings was rigorous and has been previously described [23,24]. The standardized patients and interpreters were trained using the same gold-standard training videotapes for both time points and did not hold third-year students to higher performance expectations than second-year students. Students were unaware at baseline that they would be re-tested in one year.



Interval curriculum



Students received a brief didactic curriculum about working with interpreters, consisting of readings and a one-hour online multimedia web module [25], before their baseline CPX. Immediately after the baseline CPX station, students were given their performance scores and general group feedback in a small-group setting (five to eight students per group), but no individual feedback. During clerkships in year 3, students worked variably with interpreters but received no further formal training or feedback. Students completed block clerkships (four to six weeks per block) in the core disciplines, including Internal Medicine, Family Medicine, Obstetrics and Gynecology, Pediatrics, Psychiatry, Emergency Medicine and Surgery. Formal teaching about cross-cultural medicine occurred in the Family Medicine and Pediatrics clerkships, consisting of two 2-hour sessions and one 1-hour session, respectively. No formal teaching about working with interpreters was provided during the clerkship year. In each clerkship, clerkship directors independently assessed students, and a passing grade was required before moving on to the next clerkship.



Assessment Tools



We examined performance in general communication using the validated Patient Physician Interaction Scale (PPI), administered by the standardized patient [26]. Skills for working with the interpreter were rated by the standardized patient using the validated Interpreter Impact Rating Scale (IIRS) [23] and by the standardized interpreter using the validated Interpreter Scale (IS) [24]. The IIRS and IS have been validated for interpreted encounters (that is, language-discordant encounters) in Spanish, Vietnamese, Russian and Chinese [23,24]. All students completed a written self-assessment outside the room after the encounter.



Rating Scales



The PPI is a 7-item rating scale using a 6-point Likert scale (ranging from 1=unacceptable to 6=outstanding) to measure the competency of communication in the context of professional behavior (see Table 1a for scale items). The mean of the summed scores across the seven PPI items was used to rate overall communication skills.



Table 1a. Medical student general communication skills rated by the patient using the Patient Physician Interaction (PPI) (University of California, Irvine 2006-2009), no significant differences







The IIRS is a 7-item measure [23] assessing communication in an interpreted encounter from the patient's perspective that includes both verbal and non-verbal behaviors (see Table 1b). A 5-point Likert scale (from 1=marginal/low to 5=outstanding performance) is used to rate each behavior. The global measure rating overall satisfaction of the patient, item 7, was previously found to be predictive of overall performance [23].



Table 1b. Medical student performance rated by the patient – all scale items of the Interpreter Impact Rating Scale (IIRS) (University of California, Irvine 2006-2009), no significant differences







The IS is a 13-item measure rating provider communication performance in an interpreted encounter from the interpreter's perspective, using a 5-point Likert scale for each behavior [24]. There are 12 behavior items, with the first four rating the ability of the provider to 'set the stage' for the encounter and the remaining eight rating verbal and non-verbal behaviors for 'managing the encounter' (see Table 1c). The global item 13 asks about overall satisfaction and was previously found to be predictive of overall performance [24].



Table 1c. Medical student performance rated by the interpreter using the Interpreter Scale (IS) – all items (University of California, Irvine 2006-2009), significant differences in bold







For the self-assessment (SA), medical students were asked to rate their own skills on eight items (see Table 1d) using a 6-point Likert scale (from 1=very poor to 6=excellent).



Table 1d. Medical student Self-Assessment (SA) (University of California, Irvine 2006-2009), significant differences in bold







Data collection



Patients and interpreters entered trainee performance scores electronically using the WebSP® program immediately after each encounter, and data were centrally collected. Students completed the SA electronically outside the room immediately after the encounter.



Statistical analysis



The original metric of each scale varied over the time period of the study from a binary response (for two scale items) to a 5- or 6-point Likert scale. To represent the data in a uniform way, we standardized the response scale to range between 0 and 1 while keeping the original binary, 5- and 6-point Likert response options. For example, if a student earned 2 out of 5 points on the 5-point scale, the score in our analysis would be 0.25. We calculated the difference in score between the two time points for each student for each item. Since differences in score over time did not follow a normal distribution, we tested differences between baseline and final scores for significance using the paired Wilcoxon signed-rank test. To detect differences in performance by cohort over the two years, we compared group performance across the scales for time-point and gender differences in ratings, using the Wilcoxon signed-rank test. Finally, we analyzed cross-scale correlations using the non-parametric Spearman's rho for the following comparable items: patient and interpreter global satisfaction ratings and students' ratings of their own performance. We performed all statistical analyses using Stata 10.1 for Macintosh (Stata Corporation, College Station, TX, www.stata.com).
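As an illustration of the per-item paired analysis described above, the following is a minimal sketch in Python/SciPy (the authors' analysis was run in Stata 10.1). The data values, column names and the min-max rescaling rule are illustrative assumptions; the rescaling rule is chosen so that it reproduces the worked example in the text (a score of 2 on a 1-5 scale maps to 0.25).

# Minimal sketch of the paired, per-item analysis; values are hypothetical.
import pandas as pd
from scipy.stats import wilcoxon

def rescale(score, scale_min, scale_max):
    # Map a raw Likert or binary score onto the 0-1 range; 2 on a 1-5 scale
    # maps to 0.25, matching the worked example in the text.
    return (score - scale_min) / (scale_max - scale_min)

# One row per student, with matched baseline (year 2) and follow-up (year 3)
# ratings for a single scale item (hypothetical values).
item = pd.DataFrame({
    "baseline": [2, 3, 4, 5, 3, 4, 2, 5, 4, 3],
    "followup": [3, 3, 3, 4, 4, 4, 2, 5, 3, 3],
})

baseline01 = rescale(item["baseline"], 1, 5)
followup01 = rescale(item["followup"], 1, 5)

# Paired Wilcoxon signed-rank test on the baseline vs. follow-up scores.
stat, p = wilcoxon(baseline01, followup01)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.3f}")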



Results



Study Participants and Data



The two classes did not differ significantly in gender (50% male), age (mean age 24 years in year 2 for both cohorts) or ethnicity (44% white, 40% Asian, 10% Hispanic). All students completed the CPX interpreter station, with paired performance data available for 166 (PPI), 168 (IIRS), 168 (IS) and 124 (SA) students. Fewer than 192 paired datasets were available for analysis because 24 students deferred their clerkship year; we thus did not have data for a second encounter for this group.



Each validated scale (PPI, IIRS and IS) had acceptable scale reliability over both time periods with Cronbach’s α ranging from 0.741 to 0.943 (data not presented).
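For readers who wish to reproduce the reliability check reported above, the short sketch below computes Cronbach's alpha from a students-by-items rating matrix; the 6 x 7 matrix of ratings is a hypothetical stand-in for the PPI, IIRS or IS data.

# Sketch of the Cronbach's alpha calculation for scale reliability.
import numpy as np

def cronbach_alpha(items):
    # items: 2-D array, rows = students, columns = scale items.
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical ratings for 6 students on a 7-item scale.
ratings = [[5, 5, 4, 5, 6, 5, 5],
           [3, 4, 3, 3, 4, 3, 4],
           [6, 6, 5, 6, 6, 6, 6],
           [4, 4, 4, 5, 5, 4, 4],
           [5, 6, 5, 5, 6, 5, 6],
           [2, 3, 2, 3, 3, 2, 3]]
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.3f}")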



Longitudinal Medical Student Performance (see Tables 1a to 1d)



Among students with paired individual data, performance as rated by the patient did not change significantly for the seven PPI items, the summed PPI score, or the seven items of the IIRS (see Tables 1a and 1b). Ratings by the interpreter showed improvement in two behaviors associated with 'setting the stage': 'introduced himself/herself to me' and 'adequately explained the purpose of the interview'; and worsening in five behaviors associated with 'managing the encounter': 'asked one question at a time', 'listened … without unnecessary interruption', 'asked the patient if there were any other questions', 'maintained eye contact with the patient …' and 'addressed the patient in the first person', out of a total of 13 items (see Table 1c).



Self-ratings by students showed improvement in five out of eight items (see Table 1d). The improved items were: 'setting limits for interpreter's role', 'focusing on patient instead of interpreter …', 'preserving patient confidentiality', 'perceived patient satisfaction' and 'perceived interpreter satisfaction'. Students rated the encounter, 'compared to one with a patient fluent in English', as more difficult at the second time point, although this difference was not statistically significant.



Of note, the global satisfaction items for the patient and interpreter showed no improvement over time whereas the corresponding satisfaction ratings from the student’s self-assessment showed significant improvement.



The global satisfaction scores of the patient and interpreter were moderately correlated at baseline (Spearman's rho = 0.351, p < 0.001) and after one year (Spearman's rho = 0.377, p = 0.001). Students as a group underestimated the interpreters' ratings of overall satisfaction at baseline (Spearman's rho = -0.167, p = 0.018) and overestimated interpreter overall satisfaction in year 3, although the latter was not statistically significant. No significant differences in scores for any item were observed by student gender or time point for each scale (data not shown).
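The cross-rater comparison above can be reproduced with a non-parametric correlation. The sketch below computes Spearman's rho between the patient's and the interpreter's global satisfaction ratings at one time point; the rating vectors are hypothetical and simply illustrate the calculation.

# Sketch of the Spearman correlation between the two raters' global items.
from scipy.stats import spearmanr

patient_global = [4, 5, 3, 5, 4, 2, 5, 4, 3, 5]       # hypothetical ratings
interpreter_global = [3, 5, 3, 4, 4, 2, 4, 5, 3, 4]   # hypothetical ratings

rho, p = spearmanr(patient_global, interpreter_global)
print(f"Spearman's rho = {rho:.3f}, p = {p:.3f}")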



Conclusions



We examined the developmental progression of medical students' communication skills for working with interpreters in language-discordant patient-physician encounters. Our hypothesis that students would show objective improvement in these skills over training was not supported. In individual paired comparisons, although students rated themselves as significantly improved in five of eight behaviors in an interpreted encounter over one year of clinical training, this was not reflected in concurrent external ratings by patients or interpreters. Thus we observed a disconnect between self-ratings and ratings by trained raters. The finding of lower performance in communication skills during clinical (year 3) compared with preclinical (year 2) training has previously been reported [27]. It is possible that the didactic teaching method at baseline was inadequate to impart and sustain the skills necessary for working effectively with interpreters. Similarly, the finding that physicians and novice trainees overrate their own skills is well documented in the literature and not surprising in itself [28,29].



The discordance in rating between students and their raters is of particular concern in this longitudinal study because we used exactly the same station for the baseline and follow-up assessments. Our students had the opportunity at baseline, after the CPX station, to discuss and address skills pertinent to communication when interpreters are present. Not only did they fail to improve objectively one year later; they believed that some of their skills had improved. One way to explain the self-assessment findings is the natural tendency to expect and assume that one's own skills improve during training. We speculate that lack of reinforcement of skills through formal teaching, practice, and direct observation with feedback, and possibly exposure to negative role-modeling during clerkships, may have contributed to the lack of objective improvement. The finding that scores from patients reflecting general communication skills remained high over the same period somewhat mitigates the lack of improvement in skills specific to working with interpreters, and likely confirms that skills for working with interpreters represent a distinct subset of communication skills.



The behaviors that worsened according to interpreter ratings ('asked one question at a time', 'listened … without unnecessary interruption', 'asked … if there were any other questions' and 'addressed the patient in the first person') are amenable to faculty observation with feedback for reinforcement and maintenance. Faculty who teach students communication skills may wish to pay attention to and emphasize these elements of behavior when giving feedback on encounters involving interpreters.



Our study has several strengths. We used the same standardized station, raters and validated assessment tools, and combined several perspectives (patient, interpreter, student) to obtain a multi-rater view of developmental performance. We acknowledge that patients and interpreters view students differently, as reflected in the global item scores. This is likely attributable to the different perspectives of the two raters: interpreters act primarily as observers, whereas patients are active, invested participants subject to emotional, interpersonal and other nuances. We used paired individual data for comparison rather than aggregated class data, and thus obtained valid performance data that accurately reflect longitudinal skill development, another strength of the study.



Our study has some limitations. First, we tested performance in a standardized setting, and it is uncertain how well this reflects performance in actual practice. Second, patient raters were more generous in their assessment of students' skills than interpreter raters, despite concurrent and rigorous training as a rater pair. However, even with this difference, none of the students' scores on the interpreter scale items improved from baseline to final assessment. Third, our findings were limited by the use of a single case due to resource limitations, since other clinical skills also had to be tested in the CPX. We counteracted this shortcoming by examining data from two successive classes with similar demographics who experienced similar curricula. Lastly, our study did not examine reasons why students may have worsened in their skills over time; a follow-up survey or focus group, or direct observation in clinic during clerkships, may better identify underlying factors.



We propose that having increased access to trained medical interpreters is only the first step in improving language access. We recommend that students be adequately trained to work effectively with interpreters [30,31] as part of their preparation for residency, given changing needs for linguistic competency. The clerkship is often a realistic setting in which such instruction may be implemented, since teaching often occurs in underserved settings with patients who are language-discordant with their providers, and cross-cultural medicine curricula are already an integral part of many Family Medicine clerkships.

Our findings have several implications. First, skills for working effectively with interpreters do not necessarily track with general communication skills, suggesting that they have to be taught and learnt separately. Second, without a formal curriculum combining didactics and direct observation with feedback, these skills do not naturally improve over time with clinical exposure. Third, there is a potential for learnt skills to deteriorate without reinforcement or with exposure to negative role models. We conclude that skills for working with interpreters are an important and independent subset of communication skills, and that they need to be formally taught, reinforced and assessed in trainees. Our experience, albeit limited to one medical school and to the Spanish and Vietnamese languages in an English-based setting, suggests that involving interpreters as part of the health care team in training could help to reinforce desired behaviors [32,33]. Our study provides a useful starting point for educators to devise teaching strategies that address communication weaknesses in triadic, language-discordant clinical encounters involving interpreters. Future studies will examine the effectiveness of curricula to improve skills for working with interpreters, using real-time clinical encounters with trained interpreters as raters, to validate our findings from standardized encounters, with the goal of reducing health disparities that arise from the language divide [34,35].



Acknowledgements



This project was supported by a grant from the National Institutes of Health (NIH), National Heart, Lung and Blood Institute, award # K07 HL079256-01, "An Integrative, Evidence-based Model of Cultural Competency Training in Latino Health across the Continuum of Medical Education" (2004-11), and by the Association of American Medical Colleges (AAMC) grant initiative "Enhancing Cultural Competence in Medical Schools" (2005-9), awarded to UC Irvine by the California Endowment. The contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIH or the AAMC.



References



1. Bard MR, Goettler CE, Schenarts PJ, Collins BA, Toschlog EA, Sagraves SE, Rotondo MF. Language barrier leads to the unnecessary intubation of trauma patients. The American Surgeon. 2004; 70(9):783-786.



2. Carrasquillo O, Orav EJ, Brennan TA, Burstin HR. Impact of language barriers on patient satisfaction in an emergency department. Journal of General Internal Medicine. 1999; 14:82-87.



3. Sarver J, Baker DW. Effect of language barriers on follow-up appointments after an emergency department visit. Journal of General Internal Medicine. 2000; 15:256-264.



4. Cheng E, Chen A, Cunningham W. Primary language and receipt of recommended health care among Hispanics in the United States. Journal of General Internal Medicine. 2007; 22:283-288.



5. Ngo-Metzger Q, Sorkin DH, Phillips RS, et al. Providing high-quality care for limited English proficient patients: The importance of language concordance and interpreter use. Journal of General Internal Medicine. 2007; 22(Suppl 2): 324-330.



6. Jacob EA, Sadowski LS, Rathouz PJ. The impact of an enhanced interpreter service intervention on hospital costs and patient satisfaction. Journal of General Internal Medicine. 2007; 22(Suppl 2):306-311.



8. Youdelman MK. The medical tongue: U.S. laws and policies on language access. Health Affairs (Millwood). 2008; 27:424-433.



9. Chen A, Youdelman M, Brooks J. The legal framework for language access in healthcare settings: Title VI and beyond. Journal of General Internal Medicine. 2007; 22:362-367.



10. The George Washington University, School of Public Health and Health Services and the Robert Wood Johnson Foundation. Speaking together: National language services network. Speaking Together - National Language Services Network Web site. Retrieved October 1, 2010 from http://www.speakingtogether.org/.



11. U.S. Department of Health and Human Services, Office of Minority Health. National standards for culturally and linguistically appropriate services in health care. Washington, DC: 2000 Retrieved October 1, 2010 from http://www.omhrc.gov/assets/pdf/checked/finalreport.pdf.



12. California Language Assistance Bill, Senate Bill 853. Available from: http://www.leginfo.ca.gov/pub/03-04/bill/sen/sb_0851-0900/sb_853_bill_20031009_chaptered.html and DMHC 1300.67.04 Regulations for Language Assistance Program http://www.hmohelp.ca.gov/library/reports/news/lart.pdf. Accessed October 1, 2010.



13. Flores G. The impact of medical interpreter services on the quality of health care: A systematic review. Medical Care Research and Review. 2005; 62:255-299.



14. Phillips S, Lie D, Encinas J, Tiso S, Ahearn S. Effective Use of Interpreters by Family Nurse Practitioner Students: Is Didactic Curriculum Enough? Journal of the American Academy of Nurse Practitioners. In press.



15. Kalet AL, Mukherjee D, Felix K, Steinberg SE, Nachbar M, Lee A, Changrani J, Gany F. Can a web-based curriculum improve students' knowledge of, and attitudes about, the interpreted medical interview? Journal of General Internal Medicine. 2005; 20:929-934.



16. Zabar S, Hanley K, Kachur E, Stevens D, Schwartz M, Pearlman E, Adams J, Felix K, Lipkin M, Kalet A. "Oh! she doesn't speak English!" assessing resident competence in managing linguistic and cultural barriers. Journal of General Internal Medicine. 2006; 21(5):510-513.



17. Marion GS, Hildebrandt CA, Davis SW, Marín AJ, Crandall SJ. Working effectively with interpreters: a model curriculum for physician assistant students. Medical Teacher. 2008; 30(6):612-617.



18. Association of American Medical Colleges. Tool for Assessing Cultural Competence Training (TACCT). Retrieved October 1, 2010 from https://www.aamc.org/download/54344/data/tacct_pdf.pdf.



19. Lie D, Boker J, Cleveland E. Using the tool for assessing cultural competence training (TACCT) to measure faculty and medical student perceptions of cultural competence instruction in the first three years of the curriculum. Academic Medicine. 2006; 81:557-564.



20. Lie DA, Boker J, Crandall S, DeGannes CN, Elliot D, Henderson P, Kodjo C, Seng L. Revising the Tool for Assessing Cultural Competence Training (TACCT) for curriculum evaluation: Findings derived from seven US schools and expert consensus. Medical Education Online. July 2008; Retrieved October 1, 2010 from http://www.med-ed-online.org/volume13.php.



21. Laws MB, Heckscher R, Mayo SJ, Li W, Wilson IB. A new method for evaluating the quality of medical interpretation. Medical Care. 2004; 42(1):71-80.



22. Moreno M, Otero-Sabogal R, Newman J. Assessing dual-role staff-interpreter linguistic competency in an integrated healthcare system. Journal of General Internal Medicine. 2007; 22:331-335.



23. Lie D, Boker J, Bereknyei S, Ahearn S, Fesko C, Lenahan P. Validating measures of third year medical students’ use of interpreters by standardized patients and faculty observers. Journal of General Internal Medicine. 2007; 22 (Suppl 2):336-340.



24. Lie DA, Bereknyei S, Braddock C, Encinas J, Ahearn S, Boker J. Assessing Medical Students' Skills in Working With Interpreters during Patient Encounters: A Validation Study of the Interpreter Scale. Academic Medicine. 2009; 84(5):643-650.



25. Lie DA, Bereknyei S, Kalet A, Braddock CH. Learning outcomes of a web-module to teach interpreter interaction skills to pre-clerkship students. Family Medicine. 2009; 41(4):234-235.



26. Makoul G. Essential elements of communication in medical encounters: the Kalamazoo consensus statement. Academic Medicine. 2001; 76(4):390-393.



27. Prislin MD, Giglio M, Lewis EM, Ahearn S, Radecki S. Assessing the acquisition of core clinical skills through the use of serial standardized patient assessments. Academic Medicine. 2000; 75:480-483.



28. Davis DA, Mazmanian PE, Fordis M, Van HR, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: a systematic review. Journal of the American Medical Association. 2006; 296(9):1094-1102.



29. Hodges B, Regehr G, Martin D. Difficulties in recognizing one's own incompetence: novice physicians who are unskilled and unaware of it. Academic Medicine. 2001; 76(10 Suppl):S87-S89.



30. Karliner LS, Perez-Stable EJ, Gildengorin G. The language divide. the importance of training in the use of interpreters for outpatient practice. Journal of General Internal Medicine. 2004; 19(2):175-183.



31. Schyve P. Language differences as a barrier to quality and safety in health care: The joint commission perspective. Journal of General Internal Medicine. 2007; 22:360-361.



32. Hudelson P. Improving patient-provider communication: Insights from interpreters. Family Practice. 2005; 22(3):311-316.



33. Wu AC, Leventhal JM, Ortiz J, Gonzalez EE, Forsyth B. The interpreter as cultural educator of residents: Improving communication for Latino parents. Archives of Pediatrics & Adolescent Medicine. 2006; 160(11):1145-1150.



34. Saha S, Fernandez A, Perez-Stable E. Reducing language barriers and racial/ethnic disparities in health care: An investment in our future. Journal of General Internal Medicine. 2007; 22:371-372.



35. Gregg J, Saha SE. Communicative competence: A framework for understanding language barriers in health care. Journal of General Internal Medicine. 2007; 22:368-370.




 
