ORIGINAL RESEARCH ARTICLE
Year : 2015  |  Volume : 28  |  Issue : 1  |  Page : 16-21

Small group learning: Effect on item analysis and accuracy of self-assessment of medical students


1 Biochemistry, LNMC, Bhopal, Madhya Pradesh, India
2 Pathology, LNMC, Bhopal, Madhya Pradesh, India

Date of Web Publication: 31-Jul-2015

Correspondence Address:
Shubho Subrata Biswas
Department of Biochemistry, LN Medical College, Bhopal - 462 042, Madhya Pradesh
India

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/1357-6283.161836

  Abstract 

Background: Small group sessions are regarded as a more active and student-centered approach to learning. Item analysis provides objective evidence of whether such sessions improve comprehension and make the topic easier for students, in addition to assessing the relative benefit of the sessions to good versus poor performers. Self-assessment makes students aware of their deficiencies, and small group sessions can also help students develop the ability to self-assess. This study was carried out to assess the effect of a small group session on item analysis and on students' self-assessment.

Methods: A total of 21 female and 29 male first year medical students participated in a small group session on topics covered by didactic lectures two weeks earlier. The session was preceded and followed by multiple choice question (MCQ) tests, in which students were also asked to self-assess their likely score. The MCQs used had been item analyzed in a previous student group and were chosen so that the pre- and post-tests were matched for difficulty and discriminatory indices.

Results: The small group session improved the marks of both genders equally, although female performance was better. The session made the items easier, increasing the difficulty index significantly, but there was no significant alteration in the discriminatory index. Both genders overestimated themselves in the self-assessment, with greater overestimation among males. The session improved students' self-assessment in terms of both expected marks and expectation of passing.

Discussion: The small group session improved the ability of students to self-assess their knowledge and increased the difficulty index of items, reflecting students' better performance.

Keywords: Discriminatory index, difficulty index, objective assessment, small group learning, self-assessment


How to cite this article:
Biswas SS, Jain V, Agrawal V, Bindra M. Small group learning: Effect on item analysis and accuracy of self-assessment of medical students. Educ Health 2015;28:16-21

How to cite this URL:
Biswas SS, Jain V, Agrawal V, Bindra M. Small group learning: Effect on item analysis and accuracy of self-assessment of medical students. Educ Health [serial online] 2015 [cited 2020 Jul 5];28:16-21. Available from: http://www.educationforhealth.net/text.asp?2015/28/1/16/161836


Background


There is growing dissatisfaction with the conventional lecture-based teaching of medical students, in which learning is passive, sometimes found boring, and encourages reproduction rather than understanding of material. [1] Small group learning sessions have been introduced to make students more active in the learning process and more competent in their information-seeking skills. [2] This was exemplified in the problem-based learning (PBL) curriculum adopted at McMaster University, Canada, in 1969, which now enjoys wide popularity. [3] Students in small groups have interactive sessions, fun in learning, higher motivation, better activation of prior knowledge, better elaboration of knowledge, better understanding and therefore better recall of information when they apply it. [4] Interactive learning is often preferred by medical students and faculty alike. [5] However, small group learning in practice requires greater staffing and resources [6] and does not always produce the desired result. [7] Besides, different instructional methods are known to produce different learning outcomes, [8] and a blended approach has been recommended for optimal results. [9]

Whatever the instructional mode, students' assessment of their own knowledge should be part of their assessment, along with their faculty's assessment of their knowledge. Self-assessment is a structured process within which learners judge the activities they have just performed or the quality and quantity of their learning. [10] It is an appraisal of how accurate their judgment is of what they know or do not know, and an opportunity for students to develop the ability to make judgments, which is the basis for making decisions. [10] Insight into one's deficiencies is required, as people can otherwise be unaware of their areas of incompetence. Such people not only lack the ability to produce a correct response, but also lack the expertise necessary to realize that deficiency. [11] In such cases, their perception of their performance does not correlate with their actual accomplishment. The ability to evaluate one's deficiencies accurately must therefore be an integral part of adult learning. [12] In prospective doctors, valid self-assessment is particularly vital for professional competence, [13],[14] as they are dealing with human life. Developing self-assessment skills is therefore an essential prerequisite to becoming a doctor. [15] However, assessment of students within the medical curriculum is almost exclusively done by faculty, with no major role given to students' self-assessment. [16] Although some studies on the self-assessment of medical students have been carried out in other parts of the world, [10],[16] the lack of such studies of Indian medical students reflects the lack of awareness in our country.

Item analysis of multiple choice questions (MCQs) is primarily used to assess the quality of items, but it can also be used to assess the test taker's true ability. [17] Items with a difficulty index between 30% and 70% and a discriminatory index >0.25 are considered to have good difficulty and discrimination indices, respectively. [18] The difficulty index score is inversely proportional to how difficult the MCQs were for the students, as reflected in their test scores: a higher score indicates that students found the MCQs easier, and vice versa. A change in the difficulty index following a small group session would therefore reflect the session's effect on student comprehension. The discriminatory index score is directly proportional to the gap between the good and poor performers, so a change in the discriminatory index after an educational session indicates that the gap between good and poor performing students was altered. Hence, MCQs chosen with specific difficulty and discriminatory indices provide a valid tool not just for the objective assessment of students' cognitive processing [19] but also for measuring the difference in their performance brought about by the small group session. [20] Vakani et al. studied the change in the difficulty and discriminatory indices after small groups of general practitioners were asked to carry out a case-based task. [20] Our study is the first attempt to use the change in the difficulty and discriminatory indices to assess students' performance after learning in small groups.

This study was carried out among first year medical students of our institute to evaluate the impact of small group learning on item analysis and student self-assessment. Item analysis was carried out to provide objective evidence of whether the small group session improved students' comprehension of the topic and made it easier for them, and to assess the relative benefit of the session for good performers versus poor performers. Self-assessment was carried out to provide insight into whether the students were aware of their deficiencies and whether the session helped them develop their ability to self-assess.


Methods


This study was carried out in the Department of Biochemistry at LN Medical College, Bhopal, a private institute in central India. The institution opened in 2009 and is now recognized by the Medical Council of India for an intake of 150 MBBS students. Permission of the Institutional Ethical Committee and informed consent of the students were obtained prior to the study.

Fifty-one of the 150 first year medical students of 2013-14, aged 17-20 years, participated in this study. One male student dropped out midway, reducing the study sample to 50. All 150 students were taught carbohydrate chemistry and metabolism through a series of 12 didactic lectures. Two weeks after the last didactic lecture, a small group session was held on the same topics. The session took an entire day and was therefore held on a weekend. The participating 51 students were divided into three small groups of 17 each, each with 10 male and 7 female students; however, Group A was reduced to 16 students after the one dropout. One faculty member in each group acted as a facilitator. The facilitators were all postgraduates in medical biochemistry who had attended the medical education training workshop mandated by the Medical Council of India. For equity reasons, similar sessions were held in groups of 50 on the next two weekends for the students who did not participate in the study.

The sample size was calculated based on a previous study. [20] Fifty-one students were selected on a first-to-consent basis from those who had attended all 12 lectures. These were then divided into three groups matched for age, gender and performance in the qualifying XII standard board exams.

The pre-test was carried out in the morning, marking the beginning of the session. It comprised 30 MCQs to be answered in 30 min on the two topics mentioned above. Each correct answer was given one mark and there was no negative marking for incorrect answers; the maximum possible score was 30 and the passing score was 15. At the end of the pre-test, students were asked to assess their likely score by stating the total number of questions they thought they had answered correctly. Anyone expecting to get half or more of the questions right was expecting to pass, as the percentage required for passing was 50%. After the pre-test, the students were given half an hour to search for short and long answer questions on these two topics from the previous five years' university examination papers, made available from the college library. They were then allowed another hour to clarify unclear terms, formulate the learning objectives in each question, recall prior knowledge on the topic from the didactic lectures and brainstorm together to analyze possible answers. This session was chaired by a student from the group, while another noted down the proceedings. It was followed by two hours of individual study using college library books and on-line literature. The students were allowed a lunch break before they returned to their groups. The post-lunch session comprised two hours of group discussion among the students, with the faculty member acting as a coordinator and ensuring participation of all students; it was chaired and recorded by a different pair of students. The session ended with a post-test similar to the pre-test, but on a different set of 30 MCQs on the same topics. The students were again asked to self-assess their likely score out of 30.

The MCQs of both the pre- and post-test were chosen from a question bank. This task was assigned to a faculty member from the department of pathology to prevent any bias. The question bank had been prepared by assigning each item a difficulty and discriminatory index score based on its use with the previous year's student group. Sixty MCQs were picked: the difficulty index was ideal for 20, acceptable for another 20, in the very easy range for 10 and in the very difficult range for the remaining 10. The discriminatory index was excellent for 24 MCQs, good for 16 and acceptable for the remaining 20. Within each category of discriminatory index, MCQs were then randomly distributed into two groups to give 30 MCQs each for the pre- and post-test. The mean discriminatory index of the MCQs selected for the pre- and post-test was evenly matched at 0.32, while the mean difficulty index was 48.57% for the pre-test and 49.08% for the post-test.
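
As an illustration of this matching procedure, the short Python sketch below groups a question bank by discriminatory-index category and deals each category out randomly and evenly between the two test papers, then reports the mean indices of each paper. The item values are synthetic stand-ins (the study drew on its own item-analysed bank), and the category thresholds follow reference [18] as described in the next paragraph.

```python
import random
import statistics

random.seed(1)

# Synthetic stand-in for the 60 item-analysed MCQs (indices from the previous year's cohort).
bank = [{"id": i,
         "difficulty": random.uniform(20, 80),          # percentage
         "discrimination": random.uniform(0.25, 0.60)}  # index
        for i in range(60)]

def discrimination_category(d):
    # Thresholds as per reference [18]: >0.40 excellent, 0.30-0.40 good, 0.25-0.30 acceptable.
    if d > 0.40:
        return "excellent"
    if d >= 0.30:
        return "good"
    return "acceptable"

# Group items by category, then deal each category evenly and at random to the two papers,
# so that the pre- and post-test are matched on item quality.
by_category = {}
for item in bank:
    by_category.setdefault(discrimination_category(item["discrimination"]), []).append(item)

pre_test, post_test = [], []
for items in by_category.values():
    random.shuffle(items)
    half = len(items) // 2
    pre_test.extend(items[:half])
    post_test.extend(items[half:])

for name, paper in (("pre-test", pre_test), ("post-test", post_test)):
    print(name, len(paper),
          round(statistics.mean(i["difficulty"] for i in paper), 2),
          round(statistics.mean(i["discrimination"] for i in paper), 2))
```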

The difficulty index was calculated as the sum of the correct responses of the high and low one-third performers divided by the total number of responses in the two groups, expressed as a percentage. A difficulty index of 50-60% was considered ideal, 30-49% and 61-70% acceptable, above 70% very easy and below 30% very difficult. [18] The discriminatory index was calculated as the difference between the correct responses of the high and low one-third performers divided by the total number of responses in the two groups, multiplied by two. A discriminatory index above 0.40 was considered excellent, 0.30-0.40 good, 0.25-0.30 acceptable and below 0.25 poor. [18]
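
The two formulas above can be written out as a minimal sketch in Python; the worked item at the end uses hypothetical response counts, not data from the study.

```python
def item_indices(high_correct, low_correct, group_size):
    """Difficulty and discriminatory index for a single MCQ.

    high_correct / low_correct: correct responses in the top and bottom one-third
    of performers; group_size: number of students in each of those thirds.
    """
    total_responses = 2 * group_size
    difficulty = (high_correct + low_correct) / total_responses * 100   # percentage
    discrimination = (high_correct - low_correct) / total_responses * 2
    return difficulty, discrimination

def difficulty_category(p):
    # Category thresholds stated above (ref. 18).
    if p > 70:
        return "very easy"
    if p < 30:
        return "very difficult"
    return "ideal" if 50 <= p <= 60 else "acceptable"

# Hypothetical item: thirds of 17 students each; 14 of the top third and 6 of the bottom third correct.
p, d = item_indices(14, 6, 17)
print(round(p, 1), round(d, 2), difficulty_category(p))   # 58.8 0.47 ideal
```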

The pre- and post-test MCQ answer sheets were evaluated objectively and the self-assessment marks were also recorded. This was followed by an item analysis of the pre- and post-test MCQs. The data were analyzed statistically using IBM SPSS version 16 (Chicago, USA).

The effect of the small group session on student performance was evaluated by a paired t-test between the pre- and post-test objective assessment marks. The effect of the group discussion on item analysis was then assessed by the change in the difficulty and discriminatory index scores (paired t-test) and by the change in the number of items in each category of the difficulty and discriminatory indices.

Student self-assessment was assessed by a paired t-test and by the correlation between objective assessment and self-assessment scores. The self-assessment bias (the difference between the self-assessment and objective assessment scores) reflected the overestimation or underestimation in students' self-assessment of their own performance. This was done for both the pre-test and the post-test. For each, the distribution of the number of students with no bias (accurate assessment), positive bias (over-assessment) and negative bias (under-assessment) was reported. All values calculated for bias were absolute deviations, not arithmetic deviations. The effect of the small group session on the change in bias between the pre- and post-test was tested by a paired t-test. The number of students actually scoring the required 15 marks or more to pass and the number expecting to do so by self-assessment were noted for both the pre- and post-test.
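
The comparisons described in the last two paragraphs were run in SPSS; the sketch below shows an equivalent analysis in Python with SciPy, using a handful of hypothetical scores in place of the study data: a paired t-test on the objective marks, a paired t-test on the absolute self-assessment bias, correlations between self-assessed and objective marks, and a count of students expecting versus actually reaching the pass mark of 15.

```python
import numpy as np
from scipy import stats

# Hypothetical per-student marks out of 30 (the real analysis used SPSS v16).
pre_obj   = np.array([12, 14, 10, 16, 13, 11])   # objective pre-test marks
post_obj  = np.array([20, 22, 17, 24, 21, 19])   # objective post-test marks
pre_self  = np.array([18, 16, 15, 20, 19, 17])   # self-assessed pre-test marks
post_self = np.array([21, 23, 18, 25, 22, 20])   # self-assessed post-test marks

# Effect of the small group session on objectively assessed performance.
t_perf, p_perf = stats.ttest_rel(post_obj, pre_obj)

# Self-assessment bias as the absolute deviation of self-assessed from objective marks,
# and the change in that bias after the session.
pre_bias, post_bias = np.abs(pre_self - pre_obj), np.abs(post_self - post_obj)
t_bias, p_bias = stats.ttest_rel(pre_bias, post_bias)

# Agreement between self-assessment and objective assessment on each test.
r_pre,  _ = stats.pearsonr(pre_self,  pre_obj)
r_post, _ = stats.pearsonr(post_self, post_obj)

# Expectation to pass (>= 15 marks) versus actually passing, on the post-test.
expected_pass = (post_self >= 15).sum()
actual_pass   = (post_obj  >= 15).sum()

print(f"performance: t = {t_perf:.2f}, p = {p_perf:.3f}")
print(f"bias reduction: t = {t_bias:.2f}, p = {p_bias:.3f}")
print(f"self vs objective correlation: pre r = {r_pre:.2f}, post r = {r_post:.2f}")
print(f"expected to pass: {expected_pass}, actually passed: {actual_pass}")
```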


Results


The mean age of group A was 18.50 years, of group B 18.18 years and of group C 18.59 years. The mean marks obtained in the higher secondary (XII standard) board examination were 71.38% for group A, 67.82% for group B and 71.06% for group C, with one-way analysis of variance (ANOVA) showing no significant difference (F = 0.525, P = 0.595). The three groups were also evenly matched in their pre-test objective scores, with one-way ANOVA showing no significant difference (F = 0.119, P = 0.888).
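
For readers reproducing this kind of baseline-equivalence check outside SPSS, a one-way ANOVA across the three groups can be computed as below; the board-exam percentages shown are made up for illustration, not taken from the study.

```python
from scipy import stats

# Hypothetical XII-standard board percentages for the three groups (not the study data).
group_a = [72.4, 68.0, 75.1, 70.2, 71.6]
group_b = [66.5, 69.8, 65.0, 70.4, 67.9]
group_c = [73.0, 69.5, 71.2, 70.8, 72.1]

# One-way ANOVA: a large P value suggests the groups are evenly matched at baseline.
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.3f}, P = {p_value:.3f}")
```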

There was a statistically significant improvement in the students' objectively assessed marks after the small group session [Table 1]. Out of a possible maximum of 30 marks, the students scored an average of 7.1 marks more in the post-test than in the pre-test. The improvement was similar for both genders, but females' mean marks were higher in both the pre- and post-test. All three groups showed comparable and statistically significant improvement.
Table 1: Effect of small group session on objective assessment marks



In the item analysis, the small group session increased the difficulty index significantly, from 39.8% in the pre-test to 61.4% in the post-test [Table 2]. The number of very easy items increased from three in the pre-test to eight in the post-test. Likewise, the number of very difficult items decreased from eleven in the pre-test to three in the post-test [Table 3]. There was also a slight increase in the discriminatory index, from 0.34 to 0.43, but it was not statistically significant. There was a slight increase in the number of excellent items from 16 to 21 and a slight decline in the number of good items, but otherwise the distribution of items across the discriminatory index categories remained almost the same [Table 3].
Table 2: Effect of small group session on difficulty and discriminatory index

Table 3: No. of items in difficulty and discriminatory index categories of pre- and post-test




There was a statistically significant positive bias, or overestimation, in students' self-assessment, both in the pre-test and in the post-test [Table 4]. The overestimation was less for females, who showed a significant correlation of self-assessment marks with objective assessment marks in both tests, the correlation being much higher in the post-test. The overestimation was greater for males, who showed no significant correlation of self-assessment marks with objective assessment marks in the pre-test and a significant but lower correlation than females in the post-test. Thus, the accuracy of the estimation was better in the post-test than in the pre-test for both genders.
Table 4: Self-assessment bias (inaccuracy in self-assessing) in Pre- and Post-test



The small group session brought about a significant reduction in the positive bias, or overestimation, of test performance in both genders. Females' overestimation of their marks was reduced by an average of 2.6 marks, whereas males' overestimation was reduced by 4.8 marks [Table 5]. The number of students who over-estimated their performance declined from 42 in the pre-test to 31 in the post-test [Table 6]. The number of students who expected to score the required minimum of 15 marks to pass but did not actually do so declined from 27 in the pre-test to 1 in the post-test [Table 6].
Table 5: Reduction in self-assessment bias (inaccuracy of self-assessing) after small group session

Table 6: Accuracy of self-assessment in terms of marks and expectation to pass




Discussion


Many studies have evaluated methods of delivery in the teaching-learning process. [21],[22],[23],[24] Our study specifically evaluated a small group learning session and its effect on item analysis and on the accuracy of students' self-assessment. Although we also found improvement in student performance after the small group session, this was expected. Similar findings have been reported by others, including Sharan, who attributed the improved knowledge with small group sessions to co-operative learning kinetics. [25] Swing et al. differed slightly, reporting that small group learning did not enhance the achievement of students of medium ability. [26]

MCQs have been used as the assessment tool in many studies. Saunders et al. reported that when MCQs were the assessment tool, problem-based learning fared worse than the traditional curriculum. [27] Vakani et al. showed that task-based learning increases both the difficulty index and the discriminatory index of items, while lectures decrease the discriminatory index but do not have much effect on the difficulty index. [20] In our study, the session increased the difficulty index, reflecting that student comprehension improved and the MCQs appeared less difficult. This is also supported by the decline in the number of difficult MCQs and the rise in the number of easy MCQs after the session. The increase in the discriminatory index was only slight and not statistically significant, showing that the session may have benefitted the good students more, although the gap between the good and poor performers was not significantly altered. Thus, item analysis can be used as an effective tool to evaluate the effect of a small group session.

In our study, students' assessment of their abilities exceeded their actual abilities (test mark totals). Other studies have similarly reported that student self-ratings correlate poorly with actual performance. [12],[28] Fitzgerald et al. reported that self-assessment by medical students was stable for the first two years of the course curriculum but then declined in the third year, attributed to a shift from classroom-based to clinical task-based examinations. [29] Dunning et al. reported that students with poorer knowledge fared much worse in the accuracy of their self-assessments. [11] Tousignant et al. had findings similar to ours: the weak correlation of self-assessment with students' actual ability improved in the post-test along with the improvement in their knowledge. [10] Our study also found that self-assessment improved after the small group session, with the magnitude of student over-estimation declining and the difference between the numbers expecting to pass and actually passing being almost completely bridged. Rust et al. have emphasized that the understanding of self-assessment in turn improves student learning. [30] Thus, it can be said that self-assessment and learning are truly interdependent. As expected, self-assessment has been positively perceived by students themselves as facilitating improvement in medical education. [31]

Culture is expected to have a greater influence on the performance and self-confidence of females than of males: gender differences across cultures matter more for performance, while male overestimation is quite common worldwide. In our study, females were better than males in both their performance and their self-assessment, possibly reflecting the interdependence of the two. Other studies from India have likewise reported better female performance, reflecting women's rising role in a society in transition. [32],[33] However, in developed countries the genders have been reported to be almost equivalent in performance, though there too male overestimation was greater than that of females. [34],[35] In some studies, females have been reported to underestimate themselves, being significantly less confident than their male counterparts. [36] De Saintonge et al. reported gender differences in the psychology of clinical medical students in the UK trying to improve in an adverse learning environment: in women the adverse environment was associated directly with reduced efficiency, while men suffered from the fear of negative evaluation. [37]

Our study had some limitations. The absence of a control group and the fact that we did not gather student feedback about self-assessment and their respective group facilitators restrict the implications of the study. In addition, the study involved a relatively small group of first year students from a single institute in India, so the applicability of the findings to students elsewhere is uncertain.


Conclusion


The small group session reduced students' over-estimation of their expected marks and brought their expectation of passing almost exactly in line with their actual performance. Item analysis revealed that the small group session increased the difficulty index score in the post-test, reflecting students' better comprehension. The slight but non-significant increase in the discriminatory index in the post-test suggests that the session benefitted the good students more than the poor performers, although the gap between the two was not statistically significantly altered.

These findings suggest that small group sessions improve students' ability to accurately self-assess their knowledge, but are not sufficient by themselves to improve the poor performers. Further, the females, who performed better, were also more accurate in their assessment of their own knowledge and test performance, reflecting the interdependence of performance and the ability to assess one's own performance. The fact that this held true only for females and not for males may reflect something about Indian culture.

 
References

1. Kasselbaum DG. Change in medical education: The courage and will to be different. Acad Med 1989;64:446-7.
2. Rankin JA. Problem-based medical education: Effect on library use. Bull Med Libr Assoc 1992;80:36-43.
3. Neufeld VR, Woodward CA, MacLeod SM. The McMaster M.D. programme: A case study of renewal in medical education. Acad Med 1989;64:423-32.
4. De Grave WS, Schmidt HG, Boshuizen HP. Effects of problem-based discussion in studying a subsequent text: A randomized trial among first year medical students. Instr Sci 2001;29:33-44.
5. Doucet MD, Purdy RA, Kaufman DM, Langille DB. Comparison of problem-based learning and lecture format in continuing medical education on headache diagnosis and management. Med Educ 1998;32:590-600.
6. Wood DF. ABC of learning and teaching in medicine. BMJ 2003;326:328-30.
7. Colliver JA. Effectiveness of problem-based learning curricula: Research and theory. Acad Med 2000;75:259-66.
8. Mayer RE, Greeno JG. Structural differences between learning outcomes produced by different instructional methods. J Educ Psychol 1972;63:165-73.
9. Shaffer K, Small JE. Blended learning in medical education: Use of an integrated approach with web-based small group modules and didactic instruction for teaching radiologic anatomy. Acad Radiol 2004;11:1059-70.
10. Tousignant M, DesMarchais JE. Accuracy of student self-assessment ability compared to their own performance in a problem-based learning medical program: A correlation study. Adv Health Sci Educ 2002;7:19-27.
11. Dunning D, Johnson K, Ehrlinger J, Kruger J. Why people fail to recognize their own incompetence. Curr Dir Psychol Sci 2003;12:83-7.
12. Eva KW, Cunnington JP, Reiter HI, Keane DR, Norman GR. How can I know what I don't know? Poor self-assessment in a well defined domain. Adv Health Sci Educ Theory Pract 2004;9:211-24.
13. Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med 1991;66:762-9.
14. Shepherd D, Hammond P. Self-assessment of specific interpersonal skills of medical undergraduates using immediate feedback through closed-circuit television. Med Educ 1984;18:80-4.
15. Das M, Mpofu D, Dunn E, Lanphear JH. Self and tutor evaluations in problem-based learning tutorials: Is there a relationship? Med Educ 1998;32:411-8.
16. Arnold L, Willoughby TL, Calkins EV. Self-evaluation in undergraduate medical education: A longitudinal perspective. J Med Educ 1985;60:21-8.
17. Abdalla ME. What does item analysis tell us? Factors affecting the reliability of multiple choice questions. Gezira J Health Sci 2011;7:17-25.
18. Hingorjo MR, Jaleel F. Analysis of one-best MCQs: The difficulty index, discrimination index and distractor efficiency. J Pak Med Assoc 2012;62:142-7.
19. Clifton SL, Schriner CL. Assessing the quality of multiple-choice test items. Nurse Educ 2010;35:12-6.
20. Vakani F, Jafri W, Ahmad A, Sonawalla A, Sheerani M. Task-based learning versus problem-oriented lecture in neurology continuing medical education. J Coll Physicians Surg Pak 2014;24:23-6.
21. Mattsson GE. From teaching to learning: Experiences of small CME group work in general practice in Sweden. Scand J Prim Health Care 1999;17:196-200.
22. Vakani FD, Sheerani MD, Jafri SM. Continuing medical education: The DCPE perspective. J Coll Physicians Surg Pak 2010;20:839-40.
23. Taylor D, Miflin B. Problem-based learning: Where are we now? Med Teach 2008;30:742-63.
24. Davis MH. AMEE medical education guide No. 15. Problem-based learning: A practical guide. Med Teach 1999;21:130-40.
25. Sharan S. Cooperative learning in small groups: Recent methods and effects on achievement, attitudes, and ethnic relations. Rev Educ Res 1980;50:241-71.
26. Swing SR, Peterson PL. The relationship of student ability and small-group interaction to student achievement. Am Educ Res J 1982;19:259-74.
27. Saunders N, McIntosh J, McPherson J, Engle CA. A comparison between University of Newcastle and University of Sydney final year students: Knowledge and competence. In: Nooman ZM, Schmidt HG, Ezzat ES, editors. Innovation in Medical Education: An Evaluation of its Present Status. New York: Springer Publishing Company; 1990. p. 50-4.
28. Stefanie LA. Peer, self and tutor assessment: Relative reliabilities. Stud High Educ 1994;19:69-75.
29. Fitzgerald JT, White CB, Gruppen LD. A longitudinal study of self-assessment accuracy. Med Educ 2003;37:645-9.
30. Rust C, Price M, O'Donovan B. Improving students' learning by developing their understanding of assessment criteria and processes. Assess Eval High Educ 2003;28:147-64.
31. Schiekirka S, Reinhardt D, Heim S, Fabry G, Pukrop T, Anders S, et al. Student perceptions of evaluation in undergraduate medical education: A qualitative study from one medical school. BMC Med Educ 2012;12:45.
32. Mandal A, Ghosh A, Sengupta G, Bera T, Das N, Mukherjee S. Factors affecting the performance of undergraduate medical students: A perspective. Indian J Community Med 2012;37:126-9.
33. Biswas SS, Jain V. Factors affecting performance of first year medical students in Bhopal, India. J Contemp Med Educ 2013;1:192-7.
34. Minter RM, Gruppen LD, Napolitano KS, Gauger PG. Gender differences in the self-assessment of surgical residents. Am J Surg 2005;189:647-50.
35. Mattheos N, Nattestad A, Falk-Nilsson E, Attström R. The interactive examination: Assessing students' self-assessment ability. Med Educ 2004;38:378-9.
36. Blanch DC, Hall JA, Roter DL, Frankel RM. Medical student gender and issues of confidence. Patient Educ Couns 2008;72:374-81.
37. De Saintonge DM, Dunn DM. Gender and achievement in clinical medical students: A path analysis. Med Educ 2001;35:1024-33.



 
 