Normally, an article that cites Kim Kardashian's Wikipedia entry as a reference wouldn’t make it into a scientific journal. But last week, the journal Genome Biology published a commentary by genome scientist Neil Hall that did just that.
The paper, meant to be satirical, was titled “The Kardashian index: a measure of discrepant social media profile for scientists,” and it proposed a way of determining whether scientists on social media had more influence than their scientific renown would warrant. It introduced a measure called the K-index, which compares a scientist's Twitter following with his or her citation record: scientists with far more followers than their citations would predict would have a high K-index.
From the paper:
I propose that all scientists calculate their own K-index on an annual basis and include it in their Twitter profile. Not only does this help others decide how much weight they should give to someone’s 140 character wisdom, it can also be an incentive - if your K-index gets above 5, then it’s time to get off Twitter and write those papers.
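For the curious, the calculation is simple enough to sketch in a few lines of Python. Hall's commentary fits the "expected" follower count for a scientist with C citations as roughly 43.3 × C^0.32, and defines the K-index as actual followers divided by that expectation; the constants here are taken from that reported fit, and the example numbers are invented for illustration.

```python
def k_index(followers: int, citations: int) -> float:
    """Kardashian index: actual Twitter followers divided by the
    follower count 'expected' for a scientist with that many citations,
    using the power-law fit reported in Hall's commentary."""
    expected_followers = 43.3 * citations ** 0.32
    return followers / expected_followers

# Hypothetical scientist: 5,000 followers, 500 citations.
# A K-index above 5 is Hall's tongue-in-cheek "get off Twitter" threshold.
print(k_index(5000, 500))
```

A scientist whose follower count sits right on the fitted curve would score a K-index of about 1; the hypothetical scientist above lands well past the satirical threshold of 5.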
There's a thorough and interesting conversation out there about how scientists are or should be using social media. For many scientists on social media, the K-index paper was not a welcome contribution. The paper touched several nerves, inspiring satirical pieces and even spawning a hashtag, #AlternateScienceMetrics. Critics were quick to point out that comparing scientists who use social media to Kim Kardashian was, in fact, kind of an insult to scientists who use social media.
Molecular biologist Buddhini Samarasinghe writes in a post:
This 'joke' article is only funny if you are a senior tenured professor with lots of papers and yet have a low follower count on social media. "Ha ha, let's laugh at those silly scientists doing social media outreach when they should be writing papers!" The K-index trivialises those of us who work hard to communicate science with the public.
Anthropologist Kate Clancy made a similar point, noting that the joke, which skewered people with less power in the scientific community, just wasn’t funny. And Mick Watson pointed out that “number of citations is not a measure of quality.”
But the prize for a point-by-point rebuttal goes to Red Ink, which has produced a brilliantly scathing annotation of the paper — fitting for an article whose tone is dry enough that some readers have taken it seriously.