%0 Report
%A Doerfel, Stephan
%A Zoller, Daniel
%A Singer, Philipp
%A Niebler, Thomas
%A Hotho, Andreas
%A Strohmaier, Markus
%D 2014
%T Of course we share! Testing Assumptions about Social Tagging Systems
%3 techreport
%F doerfel2014course
%K 2014, analysis, assumptions, bibsonomy, data, folksonomy, log, myown, share, social, tagging, testing, weblog
%X Social tagging systems have established themselves as an important part in today's web and have attracted interest from our research community in a variety of investigations. The overall vision of our community is that simply through interactions with the system, i.e., through tagging and sharing of resources, users would contribute to building useful semantic structures as well as resource indexes using uncontrolled vocabulary, not only due to the easy-to-use mechanics. Henceforth, a variety of assumptions about social tagging systems have emerged, yet testing them has been difficult due to the absence of suitable data. In this work we thoroughly investigate three available assumptions - e.g., is a tagging system really social? - by examining live log data gathered from the real-world public social tagging system BibSonomy. Our empirical results indicate that while some of these assumptions hold to a certain extent, other assumptions need to be reflected and viewed in a very critical light. Our observations have implications for the design of future search and other algorithms to better reflect the actual user behavior.
%Z cite arxiv:1401.0629
%U http://arxiv.org/abs/1401.0629

%0 Journal Article
%A Demšar, Janez
%D 2006
%T Statistical Comparisons of Classifiers over Multiple Data Sets
%B J. Mach. Learn. Res.
%I JMLR.org
%V 7
%P 1--30
%8 December
%@ 1532-4435
%3 article
%F demvsar2006statistical
%K classification, prediction, significance, testing
%X While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams.
%U http://dl.acm.org/citation.cfm?id=1248547.1248548

%0 Journal Article
%A Vuong, Quang H.
%D 1989
%T Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses
%B Econometrica
%I The Econometric Society
%V 57
%N 2
%P 307--333
%@ 00129682
%3 article
%F vuong1989likelihood
%K comparision, hypothesis, likelihood, powerLaw, testing
%X In this paper, we develop a classical approach to model selection. Using the Kullback-Leibler Information Criterion to measure the closeness of a model to the truth, we propose simple likelihood-ratio based statistics for testing the null hypothesis that the competing models are equally close to the true data generating process against the alternative hypothesis that one model is closer. The tests are directional and are derived successively for the cases where the competing models are non-nested, overlapping, or nested and whether both, one, or neither is misspecified. As a prerequisite, we fully characterize the asymptotic distribution of the likelihood ratio statistic under the most general conditions. We show that it is a weighted sum of chi-square distributions or a normal distribution depending on whether the distributions in the competing models closest to the truth are observationally identical. We also propose a test of this latter condition.
%U http://www.jstor.org/stable/1912557