by the ARWU and the UK’s QS, and a single-indicator system represented by the Nature Index. A
single-indicator system focuses on one index that embodies unique information, while a synthetic
criterion system is designed by combining various single but weighted indicators. In the latter
system, different university rankings use different indicators, which in turn are assigned different
weights (van Raan, 2005; N. C. LIU, CHEN, & L. LIU, 2005; Buela-Casal et al., 2007). Studies
have compared the differences among these ranking methods (Buela-Casal et al., 2007;
Saisana, d’Hombres, & Saltelli, 2011; Dill & Soo, 2005; Aguillo et al., 2010). In general, various
indicators can be categorized into two main underlying aspects: a university’s reputation and its
research performance (Buela-Casal et al., 2007; Selten et al., 2020). In addition to the differences
in the content of indicators, rankings also differ in the types of indicators they prefer. For
example, ARWU relies only on statistical indicators, while the rankings by US News & World Report
and the UK’s QS use a combination of statistical indicators and surveys. Both types have defects:
rankings that rely heavily on statistical indicators usually privilege publications in English, the
lingua franca of science, over those in German, French and other languages (van Raan, 2005), and
surveys have also been found to be biased (Vernon, Balas, & Momani, 2018).
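As a concrete illustration of the difference between a single-indicator system and a synthetic criterion system, the minimal Python sketch below computes both kinds of scores for hypothetical universities; the indicator names, values and weights are assumptions made only for illustration and do not correspond to any actual ranking.

# A minimal sketch, assuming min-max normalization and a weighted sum;
# the indicator names, values and weights are hypothetical.

def min_max_normalize(values):
    """Scale raw indicator values to the [0, 1] range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def synthetic_scores(indicators, weights):
    """Combine several normalized indicators into one weighted score per university."""
    normalized = {name: min_max_normalize(vals) for name, vals in indicators.items()}
    n = len(next(iter(indicators.values())))
    return [sum(weights[name] * normalized[name][i] for name in indicators)
            for i in range(n)]

# Hypothetical raw data for three universities.
indicators = {
    "publications": [1200, 900, 400],  # statistical indicator
    "reputation": [80, 95, 60],        # survey-based indicator
}

# A single-indicator ranking uses one column as-is (e.g. a Nature-Index-style count).
single_indicator = indicators["publications"]
# A synthetic criterion ranking weights and combines the normalized indicators.
synthetic = synthetic_scores(indicators, {"publications": 0.6, "reputation": 0.4})
print(single_indicator)  # [1200, 900, 400]
print(synthetic)         # approximately [0.83, 0.78, 0.0] under these weights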
More importantly, regardless of changes in the content or type of indicators, these
rankings have nothing to do with the contributions that universities have made. University rankings
can bias resource allocation and reputation. In other words, a highly ranked university finds it easier
to obtain government funding and social recognition, which in turn helps it maintain its advantage
in the next ranking exercise (Barreto, 2013). Rankings can be manipulated as well. For example,
universities can target specific indicators, such as publication counts, by hiring productive star
scientists, thus inflating their rankings (Miley, 2012). These principles are applicable to
libraries as well.
The paper is organized as follows: section 1 presents different ranking systems constructed by
changing the weights of indicators and the data sources. Subsequently, section 2 shows the rankings
of the UK QS top 100 universities under different ranking methods and the real contributions of
four universities, as well as the supporting function of their libraries. Finally, following a discussion
of quantitative indicators and qualitative values in section 3, the main conclusions are drawn in section 4.
1 Methodology
1.1 Methods
Unlike a single-indicator system, which relies on one unique indicator, different synthetic
criterion ranking systems use different scoring algorithms. To demonstrate this, we design several
synthetic criterion ranking methods. Considering the three cases of such a system shown in Table 1,
we can assign different weights to the same indicators, which leads to different outcomes. If A
represents quantity and B represents contribution, the three criterion systems appear to balance
contribution against quantity.
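To make the effect of re-weighting concrete, the short Python sketch below applies three hypothetical criterion systems to the same (already normalized) values of A and B for three invented universities; the weights and values are our own assumptions rather than those of Table 1, but they show how identical indicator values can yield different orderings.

# A sketch of three criterion systems with different weights on the same
# indicators A (quantity) and B (contribution); all numbers are hypothetical.

universities = {
    "U1": {"A": 0.9, "B": 0.30},  # high quantity, low contribution
    "U2": {"A": 0.5, "B": 0.80},  # moderate quantity, high contribution
    "U3": {"A": 0.7, "B": 0.55},
}

criterion_systems = {
    "case 1": {"A": 0.7, "B": 0.3},  # weights quantity more heavily
    "case 2": {"A": 0.5, "B": 0.5},  # balances the two indicators
    "case 3": {"A": 0.3, "B": 0.7},  # weights contribution more heavily
}

for case, weights in criterion_systems.items():
    scores = {name: weights["A"] * vals["A"] + weights["B"] * vals["B"]
              for name, vals in universities.items()}
    ranking = sorted(scores, key=scores.get, reverse=True)
    print(case, ranking)
# Case 1 ranks U1 first, while cases 2 and 3 rank U2 first:
# the outcome depends on the weights, not only on the underlying data.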