LETTER TO THE EDITOR
Year: 2018 | Volume: 31 | Issue: 3 | Page: 189-190
Falling prey to an impact factor craze
Deepak Juyal1, Benu Dhawan2, Vijay Thawani3, Shweta Thaledi4
1 Department of Microbiology, Government Doon Medical College, Dehrakhas, Patelnagar, Dehradun, Uttarakhand, India
2 Department of Microbiology, All India Institute of Medical Sciences, New Delhi, India
3 Department of Pharmacology, People's College of Medical Sciences and Research Centre, Bhapur, Bhopal, Madhya Pradesh, India
4 Department of Microbiology, Sridev Suman Subharti Medical College, Subhartipuram, Prem Nagar, Dehradun, Uttarakhand, India
Date of Web Publication: 23-May-2019
Correspondence Address: Department of Microbiology, Government Doon Medical College, Dehrakhas, Patelnagar, Dehradun - 248 001, Uttarakhand, India
Source of Support: None, Conflict of Interest: None
How to cite this article: Juyal D, Dhawan B, Thawani V, Thaledi S. Falling prey to an impact factor craze. Educ Health 2018;31:189-90.
The journal impact factor (JIF) has become the de facto indicator of the quality of a research publication and of an author's scientific achievement. This raises serious concerns about the use of the JIF as a surrogate marker for the quality of research, of articles, or of the researcher.
Research scientists are often ranked on the basis of their publications in journals with a high impact factor (IF). This has led to IF-based assessment for appointments, promotions, and the allocation of research grants. It has thus become imperative for scientists to publish their work in high-IF journals, and they are more concerned about “where they publish rather than what they publish.” The continuous pressure to publish in high-IF journals breeds performance anxiety among researchers, who may then indulge in unethical practices such as data falsification and fabrication. Taking advantage of this IF “craze,” many operators have started assigning fake IFs to predatory journals, whose sole aim is to earn from publication fees or article processing charges.
As there are no agreed principles governing the interpretation of the JIF, its calculation is open to considerable variation and manipulation. In calculating the JIF, only original papers and review articles are counted in the denominator, whereas citations to all published material, including editorials, letters to the editor, news items, and book reviews, as well as original papers and review articles, are counted in the numerator. This asymmetry significantly inflates the JIF. The JIF can be further skewed by self-citation, negative citation, the publication of more reviews, or editorial dictum, for example, editors asking authors to cite articles from their own journal.
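To make the asymmetry concrete, the standard two-year JIF can be sketched as follows (the figures below are hypothetical, chosen only to illustrate the effect):

```latex
\mathrm{JIF}_{2018} =
\frac{\text{citations received in 2018 to \emph{all} items published in 2016--2017}}
     {\text{citable items (original papers + reviews) published in 2016--2017}}
```

Suppose a journal published 100 original papers and reviews plus 50 editorials and letters in 2016–2017, and all 150 items together drew 400 citations in 2018. The JIF would be 400/100 = 4.0, because the 50 non-citable items contribute citations to the numerator while being excluded from the denominator.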
The abuse of the JIF as a metric of an individual scientist's or article's importance has been decried by the San Francisco Declaration on Research Assessment (DORA), which aims to end the practice of using the JIF as a valuation metric. A comprehensive scientific evaluation of an article requires a multidimensional approach and is beyond the scope of any single metric such as the IF. Although a diverse range of parameters, such as the h-index, Y-factor, Eigenfactor, and altmetrics, can be used as evaluation metrics, no “one size fits all” set of metrics can assess the credibility of researchers or their publications. In evaluating the performance of a researcher, administrators should focus on contribution and content rather than on publication venue. Moreover, the traditional method of evaluation remains peer review.
In spite of widespread recognition that the IF is misused, this misuse continues because of forces within the scientific community that encourage, promote, and perpetuate it. We submit that the JIF is not an appropriate metric for measuring the scientific content of individual articles or a scientist's credibility, and that it exerts an increasingly detrimental influence on the scientific enterprise. Using the JIF as a surrogate for scientific valuation will not only affect the research scientists involved but may also foster an unhealthy research culture and hamper overall scientific progress. Administrators should be aware that the IF is an inadequate measure of individual achievement.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
References
Alberts B. Impact factor distortions. Science 2013;340:787.
Stephan P. Research efficiency: Perverse incentives. Nature 2012;484:29-31.
Oh HC, Lim JF. Is the journal impact factor a valid indicator of scientific value? Singapore Med J 2009;50:749-51.
Feetham L. Can you measure the impact of your research? Vet Rec 2015;176:542-3.
DORA: San Francisco Declaration on Research Assessment. American Society for Cell Biology; 2013. Available from: http://www.am.ascb.org/dora/. [Last accessed on 2016 May 17].