TY - RPRT
AU - Doerfel, Stephan
AU - Zoller, Daniel
AU - Singer, Philipp
AU - Niebler, Thomas
AU - Hotho, Andreas
AU - Strohmaier, Markus
T1 - Of course we share! Testing Assumptions about Social Tagging Systems
PY - 2014
UR - http://arxiv.org/abs/1401.0629
KW - 2014
KW - analysis
KW - assumptions
KW - bibsonomy
KW - data
KW - folksonomy
KW - log
KW - myown
KW - share
KW - social
KW - tagging
KW - testing
KW - weblog
AB - Social tagging systems have established themselves as an important part in today's web and have attracted the interest from our research community in a variety of investigations. The overall vision of our community is that simply through interactions with the system, i.e., through tagging and sharing of resources, users would contribute to building useful semantic structures as well as resource indexes using uncontrolled vocabulary not only due to the easy-to-use mechanics. Henceforth, a variety of assumptions about social tagging systems have emerged, yet testing them has been difficult due to the absence of suitable data. In this work we thoroughly investigate three available assumptions - e.g., is a tagging system really social? - by examining live log data gathered from the real-world public social tagging system BibSonomy. Our empirical results indicate that while some of these assumptions hold to a certain extent, other assumptions need to be reflected and viewed in a very critical light. Our observations have implications for the design of future search and other algorithms to better reflect the actual user behavior.
ER -

TY - JOUR
AU - Demšar, Janez
T1 - Statistical Comparisons of Classifiers over Multiple Data Sets
JO - J. Mach. Learn. Res.
PY - 2006/12
VL - 7
SP - 1
EP - 30
UR - http://dl.acm.org/citation.cfm?id=1248547.1248548
KW - classification
KW - prediction
KW - significance
KW - testing
AB - While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams.
ER -

TY - JOUR
AU - Vuong, Quang H.
T1 - Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses
JO - Econometrica
PY - 1989
VL - 57
IS - 2
SP - 307
EP - 333
UR - http://www.jstor.org/stable/1912557
KW - comparison
KW - hypothesis
KW - likelihood
KW - powerLaw
KW - testing
AB - In this paper, we develop a classical approach to model selection. Using the Kullback-Leibler Information Criterion to measure the closeness of a model to the truth, we propose simple likelihood-ratio based statistics for testing the null hypothesis that the competing models are equally close to the true data generating process against the alternative hypothesis that one model is closer. The tests are directional and are derived successively for the cases where the competing models are non-nested, overlapping, or nested and whether both, one, or neither is misspecified. As a prerequisite, we fully characterize the asymptotic distribution of the likelihood ratio statistic under the most general conditions. We show that it is a weighted sum of chi-square distributions or a normal distribution depending on whether the distributions in the competing models closest to the truth are observationally identical. We also propose a test of this latter condition. ER -
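
Illustration for the Demšar (2006) entry above: a minimal Python sketch of the two recommended comparisons, using SciPy's implementations of the Wilcoxon signed-rank test (two classifiers) and the Friedman test (more than two classifiers over multiple data sets). The accuracy values are made-up placeholders, not results from the paper.

# Sketch of the tests recommended by Demsar (2006), using SciPy.
# The accuracy values below are illustrative placeholders,
# one entry per data set.
from scipy.stats import wilcoxon, friedmanchisquare

# Accuracies of three hypothetical classifiers on ten data sets.
acc_a = [0.81, 0.74, 0.90, 0.66, 0.85, 0.78, 0.92, 0.70, 0.88, 0.79]
acc_b = [0.79, 0.75, 0.88, 0.64, 0.84, 0.80, 0.90, 0.68, 0.86, 0.77]
acc_c = [0.76, 0.71, 0.85, 0.65, 0.80, 0.74, 0.89, 0.66, 0.83, 0.75]

# Two classifiers: Wilcoxon signed-rank test on the paired differences.
stat, p = wilcoxon(acc_a, acc_b)
print(f"Wilcoxon: statistic={stat:.2f}, p={p:.3f}")

# More than two classifiers: Friedman test over the same data sets.
stat, p = friedmanchisquare(acc_a, acc_b, acc_c)
print(f"Friedman: statistic={stat:.2f}, p={p:.3f}")
# If the Friedman test rejects, follow up with post-hoc tests
# (e.g., the Nemenyi test) and a critical-difference (CD) diagram.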
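
Illustration for the Vuong (1989) entry above: a sketch of the Vuong statistic for strictly non-nested models, computed as the standardized sum of pointwise log-likelihood differences, which is asymptotically standard normal under the null that both models are equally close to the truth. The exponential-vs-lognormal model pair and the synthetic data are assumptions made for this example, not taken from the paper.

# Sketch of Vuong's (1989) test for strictly non-nested models.
# The competing models (exponential vs. lognormal) are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.8, size=500)  # synthetic sample

# Maximum-likelihood fits for the two competing models.
scale_exp = x.mean()                            # exponential scale MLE
mu, sigma = np.log(x).mean(), np.log(x).std()   # lognormal MLEs

# Pointwise log-likelihood differences d_i = log f(x_i) - log g(x_i).
d = (stats.expon.logpdf(x, scale=scale_exp)
     - stats.lognorm.logpdf(x, s=sigma, scale=np.exp(mu)))

# Vuong statistic: sum(d) / (sqrt(n) * std(d)), asymptotically N(0, 1)
# under H0 that both models are equally close to the true process.
n = len(d)
v = d.sum() / (np.sqrt(n) * d.std())
p = 2 * stats.norm.sf(abs(v))  # two-sided p-value
print(f"Vuong statistic={v:.2f}, p={p:.3f}")
# v significantly > 0 favors the exponential model;
# v significantly < 0 favors the lognormal model.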