The first is a general one about the paper: when proposing a new measure of scientific output, it would seem appropriate to actually compare it quantitatively against the alternatives. The author "argues" (his term) that his new measure is preferable to four other common alternatives, but provides no hard evidence that it actually is better (e.g. in discriminating Nobel winners from professorial/tenured/non-tenured staff). Indeed, the evidence presented seems rather to undermine the measure: some of the Nobel winners have very mediocre scores, well below the author's threshold for a "successful scientist". But of course, I don't know how poorly the alternatives perform...
The second is a more detailed one concerning the measure itself. I'm sure that my opinion is completely biased, because the measure seems particularly ungenerous to me :-) The problem is that no effort has been made to account for the individual contribution that a researcher makes to each paper. As it stands, the measure is very generous to people who work in large groups that share wide co-authorship, and very stingy to those who largely work alone. I was surprised to note that none of the four pre-existing criteria, which the author provides for comparison, account for this factor either. In the UK (at least where I used to work, in a NERC laboratory) it was standard practice to allocate a proportion of a paper to each contributor: e.g. a 2-author paper might be shared 60%:40%, or 80%:20%, depending on the magnitude and importance of the relative contributions. Performance assessments can then be based on "paper equivalents", with one single-author paper being equivalent to, say, 2x40% + 1x20% contributions on 3 separate papers. NERC promotion guidelines also explicitly referred to single-author and/or first-author papers as being particularly valuable. According to NERC's guidelines, I was reasonably competent. By the h index, I suck. I therefore conclude that the h index is faulty :-)
I suggest the following simple adaptation: count "paper equivalents" rather than total papers. h would then be the largest integer such that the researcher's summed contributions to papers with at least h citations add up to at least h. Where this percentage contribution information is not readily available, a sensible scheme in which the share of authorship decreases with position in the author list could no doubt be devised: perhaps an exponential decay, or the nth-listed author being awarded a share proportional to 1/n (2 authors get 67%:33%; 3 authors get 55%:27%:18%).
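For concreteness, here is a rough sketch in Python of the fractional h described above. The function names and the toy data are mine, purely for illustration; the fallback weighting is just the 1/n scheme mentioned in the previous paragraph.

# Rough sketch (assumptions mine): each paper is a (citations, share)
# pair, where share is the researcher's fraction of the authorship.

def harmonic_share(position, n_authors):
    # Fallback when explicit shares are unknown: the nth-listed author
    # gets a share proportional to 1/n, so 2 authors split 67%:33% and
    # 3 authors split 55%:27%:18%, as in the text above.
    total = sum(1.0 / k for k in range(1, n_authors + 1))
    return (1.0 / position) / total

def fractional_h(papers):
    # Largest integer h such that the summed shares of papers with at
    # least h citations come to at least h.
    h = 0
    while True:
        credit = sum(share for citations, share in papers
                     if citations >= h + 1)
        if credit >= h + 1:
            h += 1
        else:
            return h

# Toy record: 2nd of 2 authors, 1st of 3 authors, and single author.
papers = [(10, harmonic_share(2, 2)),   # ~0.33
          (8, harmonic_share(1, 3)),    # ~0.55
          (3, 1.0)]
print(fractional_h(papers))             # prints 1

With whole-paper counting the same record would give h = 3, so the fractional version is, as intended, stingier towards shared authorship.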
2 comments:
Citation count is in itself of dubious value. You can, for example, write a fairly trivial article describing a new mathematical trick to solve a common problem and get lots of citations. Even worse, you can come up with a crackpot theory and get cited lots of times by people who just want to mention that there is an alternative explanation and that it is wrong.
Sure, one paper can skew things, but a substantial number of highly-cited articles is probably as good a measure as a single number can hope to provide.
Overall, I think the h index seems like a pretty good single-number measure, with the proviso that the number of authors (proportion of authorship) must be taken into account (and via email, the author seems to agree with me on that point). I do have some doubts about the widespread use of metrics such as this and would not like to see it used as an absolute criterion; on the other hand, the "does his face fit and is he one of us" approach of subjective peer review has obvious drawbacks too.