The problem with measuring Twitter

With social media finding its way into every aspect of our lives, it is becoming increasingly interesting to find what it says about the value of research. It's not a simple process, however, as David Stuart reveals

Web 2.0 sites and technologies are increasingly recognised as important tools for the research community. They offer both new networking opportunities and the ability to share a wide variety of content easily, much of which wouldn’t have been captured in traditional publication processes.

What is also increasingly recognised, albeit currently to a far lesser extent, is the need for new metrics to measure the use and impact of these tools: to encourage wider adoption of the technologies, to enable researchers to benchmark their activities, and to ensure that researchers who successfully make use of these new technologies are recognised for their efforts.

However, while the web provides the opportunity for the collection of a wide variety of metrics, it is important that we don’t confuse what is available with what is meaningful. Even for a service such as Twitter that is relatively uncomplicated, open, and well established, the creation of useful metrics is by no means simple.

At its core, Twitter allows users to publish short posts of up to 140 characters in answer to a simple question. Originally this question was the personal ‘What are you doing?’, but in November 2009 it was changed to the broader ‘What’s happening?’, a change that reflects how Twitter has evolved away from the personal status updates for which it was originally intended, into a tool used by a whole host of individuals and organisations for an ever-increasing number of purposes.

As well as using Twitter to post short updates about their work activities, a researcher may also use Twitter to share links to more detailed work elsewhere, to comment on other people’s work, to participate in conference discussions, to hold conversations with colleagues, or even to share data automatically from an experiment. Twitter not only eases certain processes that may otherwise have been more difficult, but, importantly, it offers the potential to measure such activities. However, while the expanding number of uses to which Twitter is being put is a sign of its usefulness as a tool, it inevitably creates difficulties when trying to gain real insight from that use.

Understanding the impact of any particular research output, whether it is a journal article or an update on Twitter, requires comparison with other similar outputs. We can no more draw a conclusion about the value of a researcher’s Twitter account by knowing that they have 20 followers, than we can judge the value of a research paper by knowing that it has 20 citations. While 20 citations may be considered above average in fields such as the social sciences and mathematics, it would be far less impressive in areas such as molecular biology or immunology, where there tend to be higher rates of citation.

Similarly, if we are to consider a researcher’s impact on Twitter in terms of how many followers they have, then we need to compare like with like. This is something most people would intuitively recognise if a comparison was made between the number of followers an average researcher has and the number of followers that Barack Obama has.

Followers depend on use

However, comparing like with like not only means comparing individuals or organisations in similar fields, but also comparing Twitter streams that are being used in similar ways. In the traditional research environment, it may be considered sufficient to compare the impact of an author’s work with the average number of citations in their specific field, with other factors that may influence the number of citations, such as the type of paper, given less consideration. In comparison, the purpose of a researcher’s Twitter account may have a significant impact on the elements that can be measured, such as the number of followers.

A Twitter user’s number of followers is, in many ways, the simplest indicator of a user’s impact. At first glance it would seem reasonable to suggest that a researcher with a Twitter stream of high-quality content could be expected to have far more followers than one of lower-quality content, at least once we have eliminated factors such as ‘celebrity’ and focus solely on the research community.

However, it is very easy to see how such an assumption quickly falls down: researchers use Twitter in different ways, and a researcher is rarely all things to all people. The researcher who uses Twitter to engage with a small number of colleagues online is likely to have far fewer followers than the generalist who appeals to a wider community.

At the other end of the scale, the researcher who is using Twitter as a platform for sharing live data from various experiments may find they have no followers at all. After all, those making use of the data are unlikely to be reading it in a traditional manner, but rather will be automatically downloading the data for later analysis, and may be leaving no discernible traces on the web.

In reality, a user’s number of followers would, hopefully, be quickly recognised as far too simplistic an indicator of someone’s Twitter impact to be of any use. There is little uniformity in the judgements that are made when people decide whether or not someone is worth following; some people seemingly follow anyone they come across, whilst others will try to restrict themselves to following a limited number of users, even if they may engage with members of the wider community when prompted.

As such, a user’s number of followers is far more to do with the number of people they follow and the number of updates they have made than any objective notion of quality; the young researcher following a thousand accounts and sending out a thousand updates will quickly find themselves with more followers than the thoughtful professor engaging in meaningful conversation. It is therefore necessary to consider more subtle metrics that take an increasing number of factors into account.
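The distortion described above can be sketched with a toy calculation. The metric and the figures below are purely illustrative, not an established measure: it simply normalises follower counts by the number of accounts a user follows, so that high-volume accounts are not automatically rewarded.

```python
# A toy illustration (not an established metric): raw follower counts
# tend to reward high-volume accounts, so one naive correction is to
# normalise followers by the number of accounts followed.

def follower_ratio(followers: int, following: int) -> float:
    """Followers per account followed; a value above 1 suggests the
    account attracts more attention than it hands out."""
    if following == 0:
        return float(followers)  # avoid division by zero
    return followers / following

# The prolific young researcher vs the more selective professor
# (hypothetical numbers):
prolific = follower_ratio(followers=900, following=1000)  # 0.9
selective = follower_ratio(followers=150, following=40)   # 3.75
```

Even this simple adjustment inverts the raw ranking of the two hypothetical accounts, though it remains far too crude to capture anything like ‘quality’.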

Twitter metrics

Unsurprisingly, the popularity of Twitter has meant that a number of online services have emerged promising to help with social media analysis and to make the process simpler. These services vary in both the metrics that they offer and the openness of those metrics. Overall, however, the tendency is to simplify. In the case of the increasingly popular Klout, for example, a person’s online presence can be reduced to a single Klout Score, in theory an overall indicator of social media influence.

Such a score is obviously meaningless if we accept that there is no single right way to use these technologies. Klout does supplement this score with three other indicators representing different aspects of a user’s social media presence: True Reach, the number of people a user influences; Amplification, how much a user influences them; and Network Score, the influence of the people in a user’s network. However, as the exact way in which such metrics are calculated is not made explicit, they are of little value in any meaningful conversation about a researcher’s online impact. Nonetheless, the popularity of tools like Klout is likely to encourage the belief that there is only one right way of using social media. This risks shifting people’s focus onto the metric rather than the underlying aim, whether that is successful engagement with colleagues, with the public, or something else.

The difficulty in establishing new metrics does not mean that attempts shouldn’t be made to establish them, but rather emphasises the need to proceed with care. The research community isn’t unique in looking for social-media metrics, although its requirements are likely to differ significantly from those of the general public: it is often more interested in impact and information flows within a sub-network than in reach across the whole community. Compared with other social network sites such as Facebook, the relative openness of Twitter offers the potential for wider network-based indicators. Rather than focusing on the number of followers, or even on ratios between the number of followers and the number of streams a user is following, it is possible to take into consideration the position of a user within the wider network.
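The idea of weighing a user’s position in the wider network, rather than counting raw followers, can be sketched with a PageRank-style calculation over a small ‘who follows whom’ graph. Everything below is a hypothetical illustration, with invented account names; a real analysis would work from data gathered via Twitter’s API.

```python
# A minimal PageRank-style sketch over a toy "who follows whom" graph.
# An edge A -> B means A follows B, so attention flows from the
# follower to the followed. All account names are hypothetical.

follows = {
    "alice":  ["prof_b"],
    "carol":  ["prof_b"],
    "dan":    ["prof_b", "alice"],
    "prof_b": [],  # follows nobody back
}

def pagerank(graph, damping=0.85, iterations=50):
    """Iteratively share each account's rank among the accounts it follows."""
    nodes = set(graph) | {v for targets in graph.values() for v in targets}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in graph.items():
            for t in targets:
                new[t] += damping * rank[src] / len(targets)
        # rank held by accounts that follow nobody is spread evenly
        leaked = 1.0 - sum(new.values())
        for n in nodes:
            new[n] += leaked / len(nodes)
        rank = new
    return rank

scores = pagerank(follows)
# prof_b, followed by every other account, ends up with a score well
# ahead of alice's, a gap a raw follower count (3 vs 1) understates.
```

The point is not this particular algorithm but the principle: an account followed by well-connected users scores more highly than one followed by the same number of isolated accounts, which is closer to the sub-network notion of impact discussed above.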

Such metrics are not simple, or obvious, and can only be established through discussion in the open rather than as proprietary metrics behind closed doors. Even then we cannot expect the establishment of a single metric to represent a researcher’s usage of social media. Rather, what is required is a barrage of metrics, each of which measures different aspects of potential online usage that can then be the start of a conversation about a researcher’s impact.

The creation of such metrics obviously presents a challenge, especially as we move from looking at a single open social-media site like Twitter to taking into consideration the wide range of content on sites that are far less open. Nonetheless, it is a challenge that the research community needs to face if it is to take full advantage of the new technologies and successfully demonstrate the transfer of knowledge and the impact of research on the wider community.

David Stuart is a research associate at the Centre for e-Research, King’s College London, and recent author of ‘Facilitating Access to the Web of Data: a Guide for Librarians’