Publications
Power laws in citation distributions: evidence from Scopus
Brzezinski, M.
Scientometrics, 103(1) 213-228 (2015) [pdf]
Modeling distributions of citations to scientific papers is crucial for understanding how science develops. However, there is considerable empirical controversy over which statistical model fits citation distributions best. This paper is concerned with rigorous empirical detection of power-law behaviour in the distribution of citations received by the most highly cited scientific papers. We have used a large, novel data set on citations to scientific papers published between 1998 and 2002 drawn from Scopus. The power-law model is compared with a number of alternative models using a likelihood ratio test. We have found that the power-law hypothesis is rejected for around half of the Scopus fields of science. For these fields of science, the Yule, power-law with exponential cut-off and log-normal distributions seem to fit the data better than the pure power-law model. On the other hand, when the power-law hypothesis is not rejected, it is usually empirically indistinguishable from most of the alternative models. The pure power-law model seems to be the best model only for the most highly cited papers in “Physics and Astronomy”. Overall, our results seem to support theories implying that the most highly cited scientific papers follow the Yule, power-law with exponential cut-off or log-normal distribution. Our findings also suggest that power laws in citation distributions, when present, account for only a very small fraction of published papers (less than 1% for most fields of science) and that the power-law scaling parameter (exponent) is substantially higher (from around 3.2 to around 4.7) than found in the older literature.
Network analysis of Zentralblatt MATH data
Cerinšek, M. & Batagelj, V.
Scientometrics, 102(1) 977-1001 (2015) [pdf]
We analyze data about works (papers, books) from the period 1990–2010 collected in the Zentralblatt MATH database. The data were converted into four 2-mode networks (works
Ranking top economics and finance journals using Microsoft academic search versus Google scholar: How does the new publish or perish option compare?
Haley, M. R.
Journal of the Association for Information Science and Technology, 65(5) 1079-1084 (2014) [pdf]
Recently, Harzing's Publish or Perish software was updated to include Microsoft Academic Search as a second citation database search option for computing various citation-based metrics. This article explores the new search option by scoring 50 top economics and finance journals and comparing them with the results obtained using the original Google Scholar-based search option. The new database delivers significantly smaller scores for all metrics, but the rank correlations across the two databases for the h-index, g-index, AWCR, and e-index are significantly correlated, especially when the time frame is restricted to more recent years. Comparisons are also made to the Article Influence score from eigenfactor.org and to the RePEc h-index, both of which adjust for journal-level self-citations.
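Two of the metrics compared in this study, the h-index and the g-index, can be computed directly from a list of per-paper citation counts. A minimal sketch (the citation counts below are made up for illustration, not taken from the paper's journal data):

```python
def h_index(cites):
    """Largest h such that h papers have at least h citations each."""
    c = sorted(cites, reverse=True)
    return sum(1 for i, x in enumerate(c, start=1) if x >= i)

def g_index(cites):
    """Largest g such that the top g papers together have >= g**2 citations."""
    c = sorted(cites, reverse=True)
    total, g = 0, 0
    for i, x in enumerate(c, start=1):
        total += x
        if total >= i * i:
            g = i
    return g

cites = [10, 8, 5, 4, 3]
# h = 4: four papers have at least 4 citations each.
# g = 5: the top 5 papers have 30 citations in total, and 30 >= 25.
```

Because the g-index credits highly cited papers for their excess citations, it is always at least as large as the h-index on the same data.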
How to Make More Published Research True
Ioannidis, J. P. A.
PLoS Med, 11(10) e1001747 (2014) [pdf]
In a 2005 paper that has been accessed more than a million times, John Ioannidis explained why most published research findings were false. Here he revisits the topic, this time to address how to improve matters.
Scholarometer: A System for Crowdsourcing Scholarly Impact Metrics
Kaur, J.; JafariAsbagh, M.; Radicchi, F. & Menczer, F.
Proceedings of the 2014 ACM Conference on Web Science (WebSci '14), ACM, New York, NY, USA, doi:10.1145/2615569.2615669, 285-286 (2014) [pdf]
Scholarometer (scholarometer.indiana.edu) is a social tool developed to facilitate citation analysis and help evaluate the impact of authors. The Scholarometer service allows scholars to compute various citation-based impact measures. In exchange, users provide disciplinary annotations of authors, which allow for the computation of discipline-specific statistics and discipline-neutral impact metrics. We present here two improvements of our system. First, we integrated a new universal impact metric hs that uses crowdsourced data to calculate the global rank of a scholar across disciplinary boundaries. Second, improvements made in ambiguous name classification have increased the accuracy from 80% to 87%.
The number of scholarly documents on the public web
Khabsa, M. & Giles, C. L.
PLoS ONE, 9(5) e93949 (2014) [pdf]
The number of scholarly documents available on the web is estimated using capture/recapture methods by studying the coverage of two major academic search engines: Google Scholar and Microsoft Academic Search. Our estimates show that at least 114 million English-language scholarly documents are accessible on the web, of which Google Scholar has nearly 100 million. Of these, we estimate that at least 27 million (24%) are freely available since they do not require a subscription or payment of any kind. In addition, at a finer scale, we also estimate the number of scholarly documents on the web for fifteen fields: Agricultural Science, Arts and Humanities, Biology, Chemistry, Computer Science, Economics and Business, Engineering, Environmental Sciences, Geosciences, Material Science, Mathematics, Medicine, Physics, Social Sciences, and Multidisciplinary, as defined by Microsoft Academic Search. In addition, we show that among these fields the percentage of documents defined as freely available varies significantly, i.e., from 12 to 50%.
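The capture/recapture logic behind the 114-million estimate can be illustrated with the classic Lincoln–Petersen estimator: if two independent "captures" of the same population overlap, the overlap fraction reveals the total size. The numbers below are invented toy values, far smaller than the paper's data.

```python
def lincoln_petersen(n1, n2, overlap):
    """Estimate total population size from two captures:
    n1 items seen in source A, n2 in source B, `overlap` in both."""
    if overlap == 0:
        raise ValueError("need a non-empty overlap between the two sources")
    return n1 * n2 / overlap

# Toy example: engine A indexes 80 documents, engine B indexes 60,
# and 40 documents appear in both indexes.
est = lincoln_petersen(80, 60, 40)  # -> 120.0
```

Intuitively, source B "recaptures" 40/80 = half of source A's documents, so source B's 60 documents are taken to be half of the whole population.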
The decline and fall of Microsoft Academic Search
Noorden, R. V.
2014 [pdf]
Validating online reference managers for scholarly impact measurement
Li, X.; Thelwall, M. & Giustini, D.
Scientometrics, 91(2) 461-471 (2012) [pdf]
This paper investigates whether CiteULike and Mendeley are useful for measuring scholarly influence, using a sample of 1,613 papers published in Nature and Science in 2007. Traditional citation counts from the Web of Science (WoS) were used as benchmarks to compare with the number of users who bookmarked the articles in one of the two free online reference manager sites. Statistically significant correlations were found between the user counts and the corresponding WoS citation counts, suggesting that this type of influence is related in some way to traditional citation-based scholarly impact but the number of users of these systems seems to be still too small for them to challenge traditional citation indexes.
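The kind of rank correlation reported here, between bookmark counts and WoS citation counts, can be sketched with a plain Spearman computation (Pearson correlation of the ranks). The toy data below are illustrative, not drawn from the paper's 1,613-article sample.

```python
def rank(values):
    """1-based average ranks; tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, converted to 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

bookmarks = [3, 10, 1, 8, 5]     # toy reader counts per article
citations = [12, 40, 2, 35, 20]  # toy WoS citation counts
rho = spearman(bookmarks, citations)  # rankings agree exactly -> rho = 1.0
```

Spearman is the natural choice here because bookmark and citation counts live on very different scales; only their orderings are compared.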
How the Scientific Community Reacts to Newly Submitted Preprints: Article Downloads, Twitter Mentions, and Citations
Shuai, X.; Pepe, A. & Bollen, J.
(2012) [pdf]
We analyze the online response of the scientific community to the preprint publication of scholarly articles. We employ a cohort of 4,606 scientific articles submitted to the preprint database arXiv.org between October 2010 and April 2011. We study three forms of reactions to these preprints: how they are downloaded on the arXiv.org site, how they are mentioned on the social media site Twitter, and how they are cited in the scholarly record. We perform two analyses. First, we analyze the delay and time span of article downloads and Twitter mentions following submission, to understand the temporal configuration of these reactions and whether significant differences exist between them. Second, we run correlation tests to investigate the relationship between Twitter mentions and both article downloads and article citations. We find that Twitter mentions follow rapidly after article submission and that they are correlated with later article downloads and later article citations, indicating that social media may be an important factor in determining the scientific impact of an article.
References made and citations received by scientific articles
Albarrán, P. & Ruiz-Castillo, J.
Journal of the American Society for Information Science and Technology, 62(1) 40-49 (2011) [pdf]
This article studies massive evidence about references made and citations received after a 5-year citation window by 3.7 million articles published in 1998 to 2002 in 22 scientific fields. We find that the distributions of references made and citations received share a number of basic features across sciences. Reference distributions are rather skewed to the right while citation distributions are even more highly skewed: The mean is about 20 percentage points to the right of the median, and articles with a remarkable or an outstanding number of citations represent about 9% of the total. Moreover, the existence of a power law representing the upper tail of citation distributions cannot be rejected in 17 fields whose articles represent 74.7% of the total. Contrary to the evidence in other contexts, the value of the scale parameter is above 3.5 in 13 of the 17 cases. Finally, power laws are typically small, but capture a considerable proportion of the total citations received.
Crowdsourcing in Article Evaluation
Peters, I.; Haustein, S. & Terliesner, J.
Proceedings of ACM WebSci '11, 1-4 (2011) [pdf]
Qualitative journal evaluation makes use of cumulated content descriptions of single articles. These can either be represented by author-generated keywords, professionally indexed subject headings, automatically extracted terms or by reader-generated tags as used in social bookmarking systems. It is assumed that particularly the users' view on article content differs significantly from the authors' or indexers' perspectives. To verify this assumption, title and abstract terms, author keywords, Inspec subject headings, KeyWords Plus™ and tags are compared by calculating the overlap between the respective datasets. Our approach includes extensive term preprocessing (i.e. stemming, spelling unifications) to gain a homogeneous term collection. When term overlap is calculated for every single document of the dataset, similarity values are low. Thus, the presented study confirms the assumption that the different types of keywords each reflect a different perspective of the articles' contents and that tags (cumulated across articles) can be used in journal evaluation to represent a reader-specific view on published content.
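The term-overlap calculation described in this abstract can be sketched as a Jaccard similarity over normalized term sets. The `normalize` step below is a deliberately crude stand-in for the stemming and spelling unification the authors perform, and the example terms are invented.

```python
def normalize(term):
    """Crude preprocessing sketch: lowercase and strip a plural 's'.
    (The paper uses real stemming and spelling unification.)"""
    t = term.lower()
    return t[:-1] if t.endswith("s") and len(t) > 3 else t

def overlap(terms_a, terms_b):
    """Jaccard similarity between two normalized term sets."""
    a = {normalize(t) for t in terms_a}
    b = {normalize(t) for t in terms_b}
    return len(a & b) / len(a | b) if a | b else 0.0

author_keywords = ["Folksonomies", "social bookmarking", "tags"]
reader_tags = ["tag", "bookmarks", "web2.0"]
sim = overlap(author_keywords, reader_tags)  # only "tag" survives in both -> 1/5
```

Even with normalization, only one of five distinct terms is shared here, mirroring the paper's finding that per-document similarity values between keyword types are low.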
Altmetrics: a Manifesto
Priem, J.; Taraborelli, D.; Groth, P. & Neylon, C.
2011 [pdf]
The Spread of Scientific Information: Insights from the Web Usage Statistics in PLoS Article-Level Metrics
Yan, K.-K. & Gerstein, M.
PLoS ONE, 6(5) e19917 (2011) [pdf]
The presence of web-based communities is a distinctive signature of Web 2.0. The web-based feature means that information propagation within each community is highly facilitated, promoting complex collective dynamics in view of information exchange. In this work, we focus on a community of scientists and study, in particular, how the awareness of a scientific paper is spread. Our work is based on the web usage statistics obtained from the PLoS Article Level Metrics dataset compiled by PLoS. The cumulative number of HTML views was found to follow a long tail distribution which is reasonably well-fitted by a lognormal one. We modeled the diffusion of information by a random multiplicative process, and thus extracted the rates of information spread at different stages after the publication of a paper. We found that the spread of information displays two distinct decay regimes: a rapid downfall in the first month after publication, and a gradual power law decay afterwards. We identified these two regimes with two distinct driving processes: a short-term behavior driven by the fame of a paper, and a long-term behavior consistent with citation statistics. The patterns of information spread were found to be remarkably similar in data from different journals, but there are intrinsic differences for different types of web usage (HTML views and PDF downloads versus XML). These similarities and differences shed light on the theoretical understanding of different complex systems, as well as a better design of the corresponding web applications that is of high potential marketing impact.
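The random multiplicative process invoked here can be sketched directly: each period a page's view count grows by a random factor, so log-views perform a random walk and the final counts are lognormally distributed by construction. The growth-rate parameters below are invented for illustration, not fitted to the PLoS data.

```python
import math
import random

random.seed(1)

def simulate_views(days=100):
    """Random multiplicative growth: multiply by exp(Normal) each day,
    so log(views) accumulates i.i.d. Gaussian increments."""
    v = 1.0
    for _ in range(days):
        v *= math.exp(random.gauss(0.05, 0.3))  # illustrative daily rates
    return v

views = [simulate_views() for _ in range(2000)]

# Fit the lognormal by taking logs and estimating mean and std dev.
logs = [math.log(v) for v in views]
n = len(logs)
mu = sum(logs) / n
sigma = (sum((x - mu) ** 2 for x in logs) / n) ** 0.5
# For this process, log-views ~ Normal(100 * 0.05, sqrt(100) * 0.3),
# so mu should come out near 5.0 and sigma near 3.0.
```

This is the same fitting logic as estimating lognormal parameters from empirical HTML-view counts: work in log space, where the distribution is Gaussian.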
The rate of growth in scientific publication and the decline in coverage provided by Science Citation Index
Larsen, P. O. & von Ins, M.
Scientometrics, 84(3) 575-603 (2010) [pdf]
The growth rate of scientific publication has been studied from 1907 to 2007 using available data from a number of literature databases, including Science Citation Index (SCI) and Social Sciences Citation Index (SSCI). Traditional scientific publishing, that is publication in peer-reviewed journals, is still increasing although there are big differences between fields. There are no indications that the growth rate has decreased in the last 50 years. At the same time publication using new channels, for example conference proceedings, open archives and home pages, is growing fast. The growth rate for SCI up to 2007 is smaller than for comparable databases. This means that SCI was covering a decreasing part of the traditional scientific literature. There are also clear indications that the coverage by SCI is especially low in some of the scientific areas with the highest growth rate, including computer science and engineering sciences. The role of conference proceedings, open access archives and publications published on the net is increasing, especially in scientific fields with high growth rates, but this has only partially been reflected in the databases. The new publication channels challenge the use of the big databases in measurements of scientific productivity or output and of the growth rate of science. Because of the declining coverage and this challenge it is problematic that SCI has been used and is used as the dominant source for science indicators based on publication and citation numbers. The limited data available for social sciences show that the growth rate in SSCI was remarkably low and indicate that the coverage by SSCI was declining over time. National Science Indicators from Thomson Reuters is based solely on SCI, SSCI and Arts and Humanities Citation Index (AHCI). Therefore the declining coverage of the citation databases problematizes the use of this source.
What do citation counts measure? A review of studies on citing behavior
Bornmann, L. & Daniel, H.
Journal of Documentation, 64(1) 45-80 (2008) [pdf]
Purpose – The purpose of this paper is to present a narrative review of studies on the citing behavior of scientists, covering mainly research published in the last 15 years. Based on the results of these studies, the paper seeks to answer the question of the extent to which scientists are motivated to cite a publication not only to acknowledge intellectual and cognitive influences of scientific peers, but also for other, possibly non-scientific, reasons. Design/methodology/approach – The review covers research published from the early 1960s up to mid-2005 (approximately 30 studies on citing behavior, reporting results in about 40 publications). Findings – The general tendency of the results of the empirical studies makes it clear that citing behavior is not motivated solely by the wish to acknowledge intellectual and cognitive influences of colleague scientists, since the individual studies also reveal other, in part non-scientific, factors that play a part in the decision to cite. However, the results of the studies must also be deemed scarcely reliable: the studies vary widely in design, and their results can hardly be replicated. Many of the studies have methodological weaknesses. Furthermore, there is evidence that the different motivations of citers are “not so different or ‘randomly given’ to such an extent that the phenomenon of citation would lose its role as a reliable measure of impact”. Originality/value – Given the increasing importance of evaluative bibliometrics in the world of scholarship, the question “What do citation counts measure?” is a particularly relevant and topical issue.
Citation rank prediction based on bookmark counts: Exploratory case study of WWW06 papers
Saeed, A.; Afzal, M.; Latif, A. & Tochtermann, K.
Proceedings of the IEEE International Multitopic Conference (INMIC 2008), doi:10.1109/INMIC.2008.4777769, 392-397 (2008) [pdf]
New developments in the collaborative and participatory role of the Web have given rise to fast-lane, Web-based information systems such as tagging and bookmarking applications. The same authors have shown elsewhere that, for the same papers, tags and bookmarks appear and gain volume much more quickly than citations, and that they correlate well with citations. Rank-prediction models based on these systems therefore offer quick insight and can localize highly productive, widely diffusible knowledge very early. This suggests it may be worthwhile to model the citation rank of a paper, within the scope of a conference or journal issue, from its bookmark count (i.e. the number of researchers who have shown interest in the publication). We used a linear regression model to predict citation ranks and compared the predicted citation-rank models based on bookmark counts and on coauthor-network counts for the papers of the WWW06 conference. The results show that the rank-prediction model based on bookmark counts is far better than the one based on the coauthor network, with a mean absolute error limited to the range of 5 for the former and above 18 for the latter. We also compared two bookmark-based prediction models, one using total citation rank as the dependent variable and the other using an adjusted citation rank, obtained by subtracting self-citations and coauthor citations from total citations. The comparison reveals a significant improvement in model fit and correlation after adjusting the citation rank. This may be interpreted to mean that bookmarking captures a phenomenon akin to the global discovery of a publication, whereas papers in coauthor networks are communicated personally, a selection process that bookmarking systems may not capture.
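A linear-regression rank prediction scored by mean absolute error, in the spirit of the model described here, can be sketched as follows. The ranks below are toy values, not the WWW06 data.

```python
def fit_line(x, y):
    """Ordinary least squares fit for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    b = num / den
    return my - b * mx, b

def mean_absolute_error(y, yhat):
    """Average absolute deviation between observed and predicted ranks."""
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

# Toy data: papers ordered by bookmark count vs. their citation rank.
bookmark_rank = [1, 2, 3, 4, 5, 6]
citation_rank = [2, 1, 3, 5, 4, 6]
a, b = fit_line(bookmark_rank, citation_rank)
pred = [a + b * x for x in bookmark_rank]
mae = mean_absolute_error(citation_rank, pred)  # small, since ranks mostly agree
```

Comparing the MAE of this model against one fitted on a weaker predictor (such as coauthor-network counts) is exactly the model comparison the paper performs.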
Earlier Web usage statistics as predictors of later citation impact
Brody, T.; Harnad, S. & Carr, L.
Journal of the American Society for Information Science and Technology, 57(8) 1060-1072 (2006) [pdf]
The use of citation counts to assess the impact of research articles is well established. However, the citation impact of an article can only be measured several years after it has been published. As research articles are increasingly accessed through the Web, the number of times an article is downloaded can be instantly recorded and counted. One would expect the number of times an article is read to be related both to the number of times it is cited and to how old the article is. The authors analyze how short-term Web usage impact predicts medium-term citation impact. The physics e-print archive—arXiv.org—is used to test this.
The influence of author self-citations on bibliometric meso-indicators. The case of european universities
Thijs, B. & Glänzel, W.
Scientometrics, 66(1) 71-80 (2006) [pdf]
In earlier studies by the authors, basic regularities of author self-citations have been analysed. These regularities relate to ageing, to the relation between self-citations and foreign citations, to the interdependence of self-citations with other bibliometric indicators and to the influence of co-authorship on self-citation behaviour. Although both national and subject-specific peculiarities influence the share of self-citations at the macro level, the authors came to the conclusion that, at this level of aggregation, there is practically no need to exclude self-citations. The aim of the present study is to answer the question of how far the influence of author self-citations on bibliometric meso-indicators deviates from that at the macro level, and to what extent national reference standards can be used in bibliometric meso analyses. In order to study the situation at the institutional level, a selection of twelve European universities representing different countries and different research profiles has been made. The results show a quite complex situation at the meso level; we therefore suggest using both variants of the indicators, including and excluding self-citations.
A compendium of issues for citation analysis
Phelan, T. J.
Scientometrics, 45(1) 117-136 (1999) [pdf]
This paper examines a number of the criticisms that citation analysis has been subjected to over the years. It is argued that many of these criticisms have been based on only limited examinations of data in particular contexts and it remains unclear how broadly applicable these problems are to research conducted at different levels of analysis, in specific field, and among various national data sets. Relevant evidence is provided from analysis of Australian and international data.
Motivations for citation: A comparison of self citation and citation to others
Bonzi, S. & Snyder, H.
Scientometrics, 21(2) 245-254 (1991) [pdf]
The citation motivations of 51 self-citing authors in several natural science disciplines were investigated. Results of a survey on reasons for both self-citation and citation to others show that there are very few differences in motivation, and that there are plausible intellectual grounds for those differences which are substantial. Analysis of exposure in text reveals virtually no differences between self-citations and citations to others. Analysis of individual disciplines also uncovers no substantive differences in either motivation or exposure in text.