Education for Health

LETTER TO THE EDITOR
Year: 2018  |  Volume: 31  |  Issue: 3  |  Page: 189-190

Falling prey to an impact factor craze


Deepak Juyal1, Benu Dhawan2, Vijay Thawani3, Shweta Thaledi4
1 Department of Microbiology, Government Doon Medical College, Dehrakhas, Patelnagar, Dehradun, Uttarakhand, India
2 Department of Microbiology, All India Institute of Medical Sciences, New Delhi, India
3 Department of Pharmacology, People's College of Medical Sciences and Research Centre, Bhapur, Bhopal, Madhya Pradesh, India
4 Department of Microbiology, Sridev Suman Subharti Medical College, Subhartipuram, Prem Nagar, Dehradun, Uttarakhand, India

Correspondence Address:
Deepak Juyal
Department of Microbiology, Government Doon Medical College, Dehrakhas, Patelnagar, Dehradun - 248 001, Uttarakhand
India




How to cite this article:
Juyal D, Dhawan B, Thawani V, Thaledi S. Falling prey to an impact factor craze. Educ Health 2018;31:189-190.





Dear Editors,

The journal impact factor (JIF) has become a de facto indicator of the quality of research publications and of an author's scientific achievement, raising serious concerns about its use as a surrogate marker for the quality of research, of individual articles, or of the researcher.

Research scientists are often ranked on the basis of their publications in journals with a high impact factor (IF).[1] This has led to IF-based assessment for appointments, promotions, and the allocation of research grants.[2] It has therefore become imperative for scientists to publish their work in high-IF journals, and they are more concerned about “where they publish rather than what they publish.”[3] The continuous pressure to publish in high-IF journals creates performance anxiety among researchers, who may then indulge in unethical practices such as data falsification and fabrication. Taking advantage of this IF “craze,” many have started assigning fake IFs to “predatory” journals,[4] the sole aim of these dubious journals being to earn from publication fees or article processing charges.

As there are no agreed principles governing the interpretation of the JIF, its calculation is open to considerable variation and manipulation. In calculating the JIF, only original papers and review articles are counted in the denominator, whereas citations to all published material, including editorials, letters to the editor, news items, and book reviews as well as original papers and reviews, are counted in the numerator. This asymmetry significantly inflates the JIF. The JIF can also be skewed by self-citation, negative citation, the publication of more reviews, or editorial dictum, for example, editors asking authors to cite articles from their own journal.[5]
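To illustrate this asymmetry with a worked example (the two-year citation window is the standard definition; the figures themselves are hypothetical and serve only to make the arithmetic concrete), the JIF for a given year is:

\[
\text{JIF}_{2018} = \frac{\text{citations received in 2018 to all items published in 2016--2017}}{\text{citable items (articles and reviews) published in 2016--2017}} = \frac{1200}{400} = 3.0
\]

If, say, 200 of those 1200 citations point to editorials and letters, which are excluded from the denominator, the ratio rises from 2.5 to 3.0 without any change in the journal's output of citable research.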

The abuse of the JIF as a metric of an individual scientist's or article's importance has been decried by the San Francisco Declaration on Research Assessment (DORA).[6] The aim of DORA is to put an end to the practice of using the JIF as a valuation metric. A comprehensive scientific evaluation of an article requires a multidimensional approach and is beyond the scope of a single metric such as the IF. Although a diverse range of parameters, such as the h-index, Y-factor, Eigenfactor, and the Altmetric widget, can be used as evaluation metrics, there is no “one size fits all” set of metrics that can assess the credibility of researchers or their publications. In evaluating the performance of a researcher, administrators should focus on contribution and content rather than on publication venue. It should also be noted that the traditional method of evaluation remains peer review.

In spite of widespread recognition that the IF is misused, the misuse continues because of forces within the scientific community that encourage, promote, and perpetuate it. We submit that the JIF is not an appropriate metric for measuring the scientific content of individual articles or the credibility of a scientist, and that it exerts an increasingly detrimental influence on the scientific enterprise. Using the JIF as a surrogate for scientific valuation will not only affect the research scientists involved but may also foster an unhealthy research culture and hamper overall scientific progress. Administrators should be aware that the IF is an inadequate measure of individual achievement.

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.

References

1. Alberts B. Impact factor distortions. Science 2013;340:787.
2. Stephan P. Research efficiency: Perverse incentives. Nature 2012;484:29-31.
3. Oh HC, Lim JF. Is the journal impact factor a valid indicator of scientific value? Singapore Med J 2009;50:749-51.
4. Beall J. Misleading Metrics. Scholarly Open Access. Available from: https://www.scholarlyoa.com/other-pages/misleading-metrics/. [Last accessed on 2016 May 17].
5. Feetham L. Can you measure the impact of your research? Vet Rec 2015;176:542-3.
6. DORA: San Francisco Declaration on Research Assessment. American Society for Cell Biology; 2013. Available from: http://www.am.ascb.org/dora/. [Last accessed on 2016 May 17].