Quantitative science evaluation, such as university rankings, relies on man-made algorithms and man-made databases. The modelling decisions underlying this data-driven, algorithmic science evaluation are, among other things, the outcome of a specific power structure in the science system. Power relations become especially visible when they are negotiated during processes of boundary work. We therefore use the discourse on 'citation cartels' to shed light on a specific perception of fairness in the scientific system, as well as on the actors who are in charge. In doing so, we draw analogies to the discourse on search engine optimization.
Scientific evaluation as a governance technique is conducted through different instruments, which have intended and unintended effects. One aspect of evaluation is the measurement of research quality through the performance of scientific publications, for example, by how often they are cited. The design of such performance indicators is a core task of bibliometrics as a discipline.
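To make this concrete, consider the two-year journal impact factor, arguably the most prominent such indicator (our illustration; the abstract does not single out a particular indicator). For a journal and a census year y it is the ratio

```latex
\mathrm{JIF}(y) = \frac{\text{citations received in } y \text{ by items published in } y-1 \text{ and } y-2}{\text{citable items published in } y-1 \text{ and } y-2}
```

Even this simple ratio embodies modelling decisions, for instance which document types count as 'citable' in the denominator, which illustrates how indicator design is shaped by choices rather than given by the data.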
There is evidence that citation-based performance indicators may have side effects on citation behaviour. The bibliometrics community has to take these effects into account: on the one hand with regard to indicator design, which aims at valid measurement; on the other hand, and perhaps more importantly, with regard to indicator use and its effects on science and society.
A similar behavioural adaptation can be observed in the development of search engine optimization (SEO). Search engine rankings share one core principle with citation-based indicators: relevance (quality) is understood to be measurable through incoming links (citations) to a website (publication). The discourse on SEO, and on which strategies count as white hat or black hat SEO, has led to a more or less stable set of 'allowed' activities, approved by the search engine monopolist Google.
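The shared principle can be made tangible with a minimal PageRank-style computation, in which a node's relevance derives from the relevance of the nodes linking to it; 'node' can be read as website or as publication. This is our own sketch for illustration, with conventional parameter choices, not an algorithm discussed in the talk:

```python
# Minimal PageRank-style scoring: a node's relevance derives from
# the relevance of the nodes that link to (cite) it.
# Damping factor and iteration count are conventional choices.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = set(links) | {t for targets in links.values() for t in targets}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for source, targets in links.items():
            if targets:  # distribute the source's rank over its outgoing links
                share = damping * rank[source] / len(targets)
                for target in targets:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Websites linking to each other -- or journals citing each other.
print(pagerank({"A": ["B", "C"], "B": ["C"], "C": ["A"]}))
```

The same mechanism that rewards a website for attracting links rewards a publication for attracting citations, which is precisely why both systems invite optimization.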
Citation-based performance indicators are also the target of optimization activities. One such activity, believed to be undertaken by scientific journals, is the formation of 'citation cartels': groups of journals that agree to cite each other in order to boost their indicators. This form of strategic citation is widely regarded as morally corrupt. Beyond this specific type, there is an ongoing debate about which citation strategies are to be regarded as scientific misconduct and therefore as a threat to the 'fairness' of performance indicators.
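Why cartel-like behaviour is detectable in principle can be sketched with a hypothetical heuristic of our own (not an actual screening procedure used by any database provider): in journal-to-journal citation counts, a cartel shows up as a pair of journals that exchange a far larger share of their citations with each other than with the rest of the field. Thresholds below are illustrative only:

```python
# Hypothetical heuristic for spotting cartel-like citation exchange:
# flag journal pairs whose mutual citation share is suspiciously high.

from itertools import combinations

def reciprocal_share(citations, a, b):
    """Fraction of a's outgoing citations that go to b."""
    total = sum(citations.get(a, {}).values())
    return citations.get(a, {}).get(b, 0) / total if total else 0.0

def flag_pairs(citations, threshold=0.3):
    flagged = []
    for a, b in combinations(sorted(citations), 2):
        # Both journals send a large share of their citations to each other.
        if (reciprocal_share(citations, a, b) > threshold
                and reciprocal_share(citations, b, a) > threshold):
            flagged.append((a, b))
    return flagged

# journal -> {cited journal -> number of citations}
citations = {
    "J1": {"J2": 80, "J3": 20},
    "J2": {"J1": 70, "J3": 30},
    "J3": {"J1": 10, "J2": 10, "J4": 80},
    "J4": {"J1": 40, "J2": 40, "J3": 5},
}
print(flag_pairs(citations))  # [('J1', 'J2')]
```

The point of the sketch is not the specific threshold but that any such cut-off is itself a modelling decision: where 'legitimate' thematic proximity ends and a 'cartel' begins is exactly what the discourse negotiates.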
In our talk, we will outline the discourse on strategic citation with examples that voice concerns, label certain strategies as unethical, or demand the detection and punishment of questionable behaviour. We especially point out that the demand to curb strategic citation is often addressed to the publication database provider Thomson Reuters. This opens up a new perspective on power structures in the science system.