Phil Bourne, the dean of the School of Data Science here at UVA, has a concise and compelling new blog post up this week about what he calls The Curse of the h-index. In it, Dean Bourne lays out vivid examples from his own work to show how far the h-index falls short of representing anything like an individual’s scholarly merit.

Bourne’s own CV is evidence of some of the most common ways that metrics like the h-index go awry, from citation creep (the blockbuster database that keeps getting cited back to Bourne, even though the database itself was long ago passed along to others) to undiscovered gems (the work Bourne knows was most challenging, but has yet to be appreciated by others in the field) to widely-read-but-rarely-cited work to the ever-present many-authored work with unclear contributions. Some of these are over-valued, some under-valued, some have their value misattributed, and some have value on a dimension citations simply don’t measure. Add it all up, and the h-index starts to look like kind of a mess.
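
For readers who haven’t computed the metric themselves, here is a minimal sketch of the standard h-index calculation, assuming nothing more than a list of per-paper citation counts (the counts in the example are invented for illustration):

```python
def h_index(citations):
    """Return the largest h such that the author has at least
    h papers with h or more citations each."""
    # Rank papers by citation count, highest first.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        # h can grow only while the paper at this rank still
        # has at least `rank` citations.
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical citation counts, for illustration only.
print(h_index([1200, 300, 45, 40, 12, 9, 3, 0]))  # -> 6
```

Everything beyond that sorted list of counts is compressed away: who actually did the work, whether the citations track the people now maintaining it, and whether heavily read work gets cited at all. That is exactly the information Bourne’s examples suggest we need.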

The curse, of course, is that busy administrators and scholars (including Bourne himself, as he acknowledges!) simply cannot resist the temptation to consider these numbers as part of evaluating the many scholars whose CVs cross their desks in a dynamic, competitive market for scholarly labor. Bourne calls on the data science community to help develop better scholarly metrics, but acknowledges that the other key ingredient is a cultural shift in academia, one that reflects the irreducibly qualitative and humanistic nature of evaluating peers. That shift may be very difficult, but it can’t come soon enough.

Read the full blog post.