Year: 2008 | Volume: 21 | Issue: 1 | Page: 201
D Pathman, M Glasser
Co Editors, Education for Health
Date of Submission: 10-Apr-2008
Date of Web Publication: 15-Apr-2008
Source of Support: None, Conflict of Interest: None
How to cite this article:
Pathman D, Glasser M. Co-Editors' Notes. Educ Health 2008;21:201
Papers in this issue of Education for Health were contributed by authors under no particular call-for-papers or special theme issue. Nevertheless, important themes link these papers.
Four papers in this issue focus on fostering quality in the evaluation of health professions education. Musal and colleagues at Turkey’s Dokuz Eylul University take the broadest look at evaluation quality in their paper’s description of an ambitious, institution-wide redesign of their medical school’s evaluation approaches. Their approach draws upon the seminal work of Kirkpatrick (2005), which places all evaluation activities within one of four levels based on the outcomes assessed, specifically assessment of (1) learner and faculty reactions to the curriculum, (2) knowledge and attitude changes in learners, (3) behavior changes in learners, and (4) changes in institutions and the broader society. Their paper reports outcomes of their school’s new curriculum within the first level (students’ opinions about and contentment with the curriculum), second level (students’ performance and perceived professional competencies), and fourth level (the school’s prompted revisions in the curriculum and examination approaches).
In a second paper, Harlak and colleagues further the quality of evaluation of students’ reactions to curriculum in communication skills (Kirkpatrick level 1) by assessing aspects of the validity and internal consistency of a Turkish translation of a student assessment tool developed and previously tested in English. In a third paper, Thomas and Hoon assess medical students’ perceptions of the fairness, difficulty and clarity of their school’s methods for evaluating what students have learned (again, Kirkpatrick level 1 outcomes). And lastly, in a fourth paper in this evaluation theme, van Dalen (an Associate Editor of this journal) points out in a personal view article that teaching methods typically used in skills training for health professions students generally rely on older educational approaches wherein instructors model the skills and students are then observed as they practice them. Van Dalen points out that these approaches do not follow the more current, constructivist educational principles in health education, like coaching (rather than lecturing) and connecting new information to what students already know. Noting that there is no evidence to suggest that any one approach is best for teaching skills to health professions students, van Dalen advocates for evaluative research.
A second theme found in three papers is the challenge of integrating the disciplines of medicine and public health. Pappas et al. present a 25-year retrospective case study of Aga Khan University, noting its inability to meet its original mandate to link public health and medical education. Charles Boelen, in his remarkable interview by Westberg, describes how, in his career working in various leadership roles in developing countries for the WHO, his efforts were repeatedly challenged by forces working to keep medicine and public health separate, including resistance to change and the territorialism of disciplines. As one means to enhance the public health skills of physician-trainees, Carney and Hackett describe the University of Vermont’s (USA) new required course in which students work with community agencies in carrying out community health interventions devised by the agencies, not the students. Interestingly, by taking a “community-first” approach to identifying projects, they report evaluation data that document only community outcomes (perceived benefits to the community, anticipated sustainability of programs), which, within Kirkpatrick’s schema, are level 4 outcomes. In a typical learner-driven curriculum, evaluators most often measure student outcomes, corresponding to Kirkpatrick levels 1, 2 and 3. It makes sense that a “community-first” education curriculum would elevate the importance of outcomes for communities in focusing its evaluation.
Three papers evaluate existing educational programs or assess students’ views to help in the planning of new training programs. Walsh assesses learners’ knowledge gains and views about an online infectious disease training program for practicing physicians (Kirkpatrick level 1 and 2 outcomes). Al-Adawi and colleagues assess medical students’ views of the field of psychiatry to understand how the medical education process might help meet the worldwide need for more mental health practitioners. Ozcakir and colleagues assess the views of first-year medical students and find, among other things, that students generally avoid talking with dying patients and believe that the dying should not be told of their impending mortality. There are obvious educational needs that follow from these findings. A final paper, by Moukhyer et al., examines the health habits of Sudanese teens, leading to recommendations for the education of youth and their parents and for community and national initiatives.
When themes link the papers of a journal issue, the papers provide more to the reader through synergy than if the same papers were published individually. This is one reason why journals have their unique foci. It is also a reason why many electronic journals, including Education for Health, continue to publish papers bundled into issues even though it is possible with electronic publication to put out papers individually.
Donald Pathman, M.D., M.P.H.
Michael Glasser, Ph.D.
Co-Editors, Education for Health
Kirkpatrick, D.L., & Kirkpatrick, J.D. (2005). Evaluating training programs: The four levels (3rd ed.). San Francisco, CA: Berrett-Koehler.