Are we emphasizing the benefits of citations all wrong? The case for networked discovery over impact measurement.

Mita Williams published a blog post this week entitled “Bret Victor, Bruno Latour, the citations that bring them together, and the networks that keep them apart.” It’s an interesting piece that critiques the use of citations as a performance measure and gives some alternative ways to look at citations. It’s well worth the read.

I’ve been working a lot with citation analysis lately, and I’ve been having some conceptual problems with it that Mita’s piece helped me think through more carefully. Mita discusses Bruno Latour’s theory of citations which, as I understand it, argues that citations exist to support the claims an author is making, and that there are many ways for an author to use a cited document to do this. A reference to another document is not necessarily a positive assessment of it, nor an indication that it’s special or unique. There are multiple reasons why an author might cite a work: to argue against it, to point to it as an entry into an interesting area of study, or just to say “somebody else once looked at this and this is what they found.” Why something is cited is situational and depends on the context of how and when it appears in the author’s paper. A citation of a document is rarely a claim that the document is important.

Now this idea doesn’t sound particularly novel, or even that interesting, in my rephrasing. What really kinda threw me off is that, despite having already read the DORA declaration, the Leiden Manifesto, and the Metric Tide, where similar points about the limits of citation metrics are made, whenever I had compiled citation metrics on an article, journal, or researcher, I never really understood what I was compiling. I had a number, but what did this number mean? What did it show?

Lately there has been an active movement away from using journal-level citation metrics (e.g. the journal impact factor), thanks especially to the DORA declaration. However, there’s still a lot of confusion about using article-level citation metrics. Evaluation committees are always looking for easier ways to evaluate a researcher’s or institution’s publication output. It’s a relatively easy move from evaluating research based on the citation metrics of the journals it was published in (which DORA recommends against) to evaluating it based on article-level citation metrics.

Consider the research profile service Pure, which Mita links to at the end of her blog post with the example of Northwestern University. As with Google Scholar profiles, total article-level citation counts are front and center, clearly meant as a way to assess a researcher’s performance and impact.

[Screenshot: Example of a Google Scholar profile]

Again I am stuck with, “What do these numbers mean?” It’s well established that citation practices vary across disciplines, that citation counts increase with the length of an author’s career or number of collaborations, that authors constantly feel pressured to engage in coercive citation practices, and that there are a lot of questionable strategies authors can use to increase their citation counts (e.g. this guide from the publisher Sage).

It’s very tempting here to adopt the view that citation metrics are ultimately meaningless for evaluation. DORA, Leiden, and the Metric Tide don’t go this far, but their “you can still use them for evaluation, just be extremely, extremely careful” doesn’t inspire much confidence that you can use metrics without misusing them.

Elizabeth Gadd has a good response to this feeling, arguing that we need to be careful about jumping on the “bad metrics bandwagon.” The real problem is simply that the value of research is extremely hard to evaluate. Yes, metrics can be faulty and misused, but a peer-review approach to evaluation has similar problems. We are all well aware of how peer reviewers can miss important mistakes and give more prestige to those in positions of power. The real problem, as Gadd says, is that we are trying to “measure the immeasurable.”

Consider, for instance, the sudden incredible importance of the publications of those researching Ebola before the 2013–2016 West African outbreak. Imagine how much worse the outbreak could have been if those researchers had been de-funded or moved to new research areas because their citation metrics were low. (Also imagine how the outbreak could have been lessened if all of that research had been open access and discoverable.)

Taking Gadd’s points, I’m going to engage more critically with how I present citation metrics and how they are used. However, I also want to suggest that maybe we should start to move away from using citation analysis for performance measurement, and instead make use of citation analysis in a way that fits better with the established practices of citation. That is, to use citations the way the authors who put them there in the first place use them: as links to supporting claims.

I think one of the most underrated features of Google Scholar is how easy it makes “citation hopping”: that is, how quickly you can find articles that cite other articles. Often when we give instruction on literature searching to students, we focus only on searching with keywords and subject headings. I’ve worked with so many students who, being new to a research area, are unaware of its keywords and subject headings. Even for those with a good grasp of them, there are still a lot of ways keyword and subject heading searching can go wrong. There are relevant articles out there that just don’t use keywords that line up with the ones being tried. Plus, there’s always a considerable amount of scholarly literature that has yet to be indexed under the relevant subject heading scheme.

What if I told you somebody had already done a lot of the work of connecting relevant publications together? This is what Google Scholar’s citation hopping takes advantage of. You find a relevant article and want to find more like it? Google Scholar makes that easy: just click the “Cited by” link under the result.

[Screenshot: A Google Scholar search result, showing the “Cited by” link]

You can then see a list of the articles that cite it, and use keyword and date searching within that list to pull out more relevant articles.

[Screenshot: The list of articles citing the selected result, with the option to search within them]

And you can keep doing this. Keep hopping around and finding other relevant articles and seeing who cites them. Keep using short keyword searches and date limits to pull better results to the top. Move around a network of linked articles on the topic you are searching.
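
For the programmatically inclined, here’s a rough sketch of what this hopping amounts to as a process. Google Scholar has no public API, so this illustration assumes the free Semantic Scholar Graph API as a stand-in, and the seed paper ID is just a placeholder; treat it as a sketch of the idea rather than a finished tool.

```python
import requests

# A hypothetical sketch of "citation hopping" as a programmatic process.
# Google Scholar has no public API, so the free Semantic Scholar Graph API
# stands in for it here; the seed paper ID below is a placeholder.

API = "https://api.semanticscholar.org/graph/v1/paper"


def citing_papers(paper_id, limit=20):
    """One 'Cited by' click: fetch papers that cite the given paper."""
    resp = requests.get(
        f"{API}/{paper_id}/citations",
        params={"fields": "title,year,citationCount", "limit": limit},
    )
    resp.raise_for_status()
    return [row["citingPaper"] for row in resp.json().get("data", [])]


def citation_hop(seed_id, hops=2, per_paper=10):
    """Hop outward from a seed paper, breadth-first, collecting candidates."""
    seen = {seed_id}
    frontier = [seed_id]
    found = []
    for _ in range(hops):
        next_frontier = []
        for pid in frontier:
            for paper in citing_papers(pid, limit=per_paper):
                if paper["paperId"] not in seen:
                    seen.add(paper["paperId"])
                    found.append(paper)
                    next_frontier.append(paper["paperId"])
        frontier = next_frontier
    return found


# Sort what we find by recency, roughly mirroring the date limits
# you would apply in Google Scholar's "Cited by" results.
seed = "649def34f8be52c8b66281af98ae884c09aef38b"  # placeholder paper ID
for paper in sorted(citation_hop(seed), key=lambda p: p.get("year") or 0, reverse=True):
    print(paper.get("year"), "-", paper["title"])
```

Nothing here is specific to Semantic Scholar; the breadth-first pattern is exactly what you are doing by hand each time you click “Cited by” and skim the results.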

Students always get very excited when I show them this capability. It’s a much quicker process, with a faster payoff, than coming up with creative search strategies and plugging them into a database or discovery layer. There’s also something to how it makes you feel as you do it. You can see this network of research linking unfold in front of you. It helps you understand how research is a collaborative process, that we are all building off each other to find answers. There’s something more beautiful and inspiring about it, and it seems to help students dive deeper into the literature than just using the top results returned for each search.

Of course, there are faults. Like any search strategy, it is an art rather than a science. This way of searching is maybe even more art-like, because the process you used to find a certain article is much harder to record or document. I’m not suggesting here that it replace keyword and subject heading searches, but rather that it’s another search method we should be promoting more.

When you do this citation hopping, you are relying on authors to correctly identify and cite relevant papers. As already discussed, there are problems with this. Publications are going to be missed, or cited when they shouldn’t be, but as your network expands through this citation hopping, you increase your chances of finding that relevant paper you need.

I’ve started doing a lot of citation hopping when I’m researching topics in library science. Lately I’ve noticed that when I come across a highly cited article on my topic in Google Scholar, I have stopped thinking, “Wow. This article must have some really useful content in it!” and instead have started thinking, “Wow. This article has a lot of citations, and I can use these to find even more relevant articles!”

One of the best parts of this citation hopping is that it’s an incredible tool for finding new literature on a topic, as well as hidden grey literature. I cannot tell you how many amazing slide presentations and recently published preprints and articles I have come across this way. Google Scholar definitely has a leg up here on other potential citation hopping tools like Scopus or Web of Science.

So to wrap it all up: I’m still a bit skeptical of how we can use citation metrics to evaluate performance or impact, though I haven’t completely given up on citation analysis yet. I think it’s measuring something that actively resists measurement, and so even qualitative attempts to evaluate research are going to have big problems. The way forward is probably to combine a range of qualitative and quantitative measures and be watchful about how these measures are used. I do strongly believe, though, that we are missing out on the real benefit citation analysis provides: the opportunity for networked discovery, and for making searching the scholarly literature easier.
