Science for Progress
9: The Journal Impact Factor: how (not) to evaluate researchers – with Björn Brembs
What is the Journal Impact Factor?
The Journal Impact Factor is widely used as a tool to evaluate studies and researchers. It supposedly measures the quality of a journal by counting how many citations the average article in that journal receives. Committees making hiring and funding decisions use the 'JIF' as a proxy for the quality of the work a researcher has published, and by extension for the capabilities of an applicant.
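For concreteness: the JIF for a given year is the number of citations received that year by items the journal published in the previous two years, divided by the number of 'citable items' it published in that window. A minimal sketch, with all numbers invented:

```python
def journal_impact_factor(citations_to_window: int, citable_items: int) -> float:
    """JIF for year Y: citations received in Y to items published in
    Y-1 and Y-2, divided by the number of 'citable items' (research
    articles and reviews) the journal published in those two years."""
    return citations_to_window / citable_items

# Hypothetical journal: 2,400 citations in 2023 to its 2021-2022
# output, and 480 citable items published in that window.
print(journal_impact_factor(2400, 480))  # -> 5.0
```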
JIF as a measure of researcher merit
I find this practice already highly questionable. First of all, the formula calculates an arithmetic mean. However, no article can receive fewer than 0 citations, while there is no upper limit. Most articles, across all journals, receive only very few citations, and only a few receive a lot. This means we get a 'skewed distribution' when we plot how many papers received how many citations, and the arithmetic mean is a poor summary statistic for skewed distributions. Moreover, basic statistics and probability tell us that if you blindly choose one paper from a journal, it is impossible to predict, or even roughly estimate, its quality from the average citation rate alone. It is further impossible to know the author's actual contribution to said paper. Thus, we are already stacking three statistical fallacies by applying the JIF to evaluate researchers.
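A quick simulation illustrates the first two points. The sketch below draws per-article citation counts from a log-normal distribution, a common stand-in for the heavy right skew of real citation data; the parameters are invented, but the qualitative picture is robust: the mean sits well above the median, and most papers fall below it.

```python
import random
random.seed(42)

# 1,000 simulated articles in one journal; the log-normal parameters
# are made up, chosen only to produce a heavily right-skewed distribution.
citations = [int(random.lognormvariate(1.0, 1.2)) for _ in range(1000)]
citations.sort()

mean = sum(citations) / len(citations)
median = citations[len(citations) // 2]

print(f"mean   (what the JIF reports): {mean:.1f}")
print(f"median (the typical paper):    {median}")
print(f"papers below the mean: {sum(c < mean for c in citations) / 1000:.0%}")
```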
But this is just the beginning! Journals have no interest in the Journal Impact Factor as a tool for science evaluation; their interest is in its advertising effect. As we learn from our guest, Dr. Björn Brembs (professor for neurogenetics at the University of Regensburg), journals negotiate with the private company Clarivate Analytics (formerly Thomson Reuters) that provides the numbers. Larger publishers in particular have a lot of room to influence the numbers above and below the division line in their favor.
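The leverage is easiest to see in the denominator: citations to all of a journal's content count toward the numerator, while only items classified as 'citable' (research articles and reviews) enter the denominator. The invented numbers below sketch how negotiating front matter such as editorials and news pieces out of the 'citable' category inflates the score:

```python
# Invented numbers for one journal. Citations count toward the
# numerator regardless of article type; only 'citable' items
# count in the denominator.
citations_to_articles = 2000      # to research articles and reviews
citations_to_front_matter = 400   # to editorials, news, letters
research_articles = 480
front_matter_items = 120

numerator = citations_to_articles + citations_to_front_matter

# Front matter classified as citable: it enlarges the denominator.
jif_strict = numerator / (research_articles + front_matter_items)
# Front matter negotiated away as 'non-citable': the denominator shrinks.
jif_negotiated = numerator / research_articles

print(f"strict:     {jif_strict:.2f}")      # 4.00
print(f"negotiated: {jif_negotiated:.2f}")  # 5.00
```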
Reputation is not quality
There is one thing the Journal Impact Factor can tell us: how good the journal's reputation is among researchers. But does that really mean anything substantial? Björn Brembs reviewed a large body of studies comparing different measures of scientific rigor with journals' impact factors. He finds that in most research fields the impact factor tells you nothing about the quality of the work; in some fields it may even be a predictor of unreliable science! This reflects the tendency of high-ranking journals to prefer novelty over quality.
How does this affect science and academia?
The JIF is omnipresent. A CV (the academic résumé) is judged not only by the names of the journals in the publication list. Another factor is the funding a researcher has been able to attract; however, funding committees may themselves use the JIF to evaluate whether an applicant is worthy of support. Further points on a CV are the reputation of the advisers, who were in turn evaluated by their publications and funding, and the reputation of the institutes one has worked at, which is to some degree evaluated by the publications and funding of their principal investigators.
It is easy to see how this puts a lot of power into the hands of the editors of high-ranking journals. Björn Brembs is concerned about the probable effect this has on the quality of science overall. If the ability to woo editors and write persuasive stories leads to more success than rigorous science does, researchers will behave accordingly, and they will teach their students to put even more emphasis on their editor-persuasion skills. Of course, not all committees use the JIF to decide who gets an interview. Still, the best strategy for early career researchers is to put all their effort into pushing their work into high-ranking journals.
What now?!
We also talk about possible solutions to the problem.