Doerfel, S.; Zoller, D.; Singer, P.; Niebler, T.; Hotho, A. & Strohmaier, M.
(2014):
Of course we share! Testing Assumptions about Social Tagging Systems.
@techreport{doerfel2014course,
author = {Doerfel, Stephan and Zoller, Daniel and Singer, Philipp and Niebler, Thomas and Hotho, Andreas and Strohmaier, Markus},
title = {Of course we share! Testing Assumptions about Social Tagging Systems},
year = {2014},
note = {cite arxiv:1401.0629},
url = {http://arxiv.org/abs/1401.0629},
keywords = {2014, analysis, assumptions, bibsonomy, data, folksonomy, log, myown, share, social, tagging, testing, weblog},
abstract = {Social tagging systems have established themselves as an important part in today's web and have attracted the interest from our research community in a variety of investigations. The overall vision of our community is that simply through interactions with the system, i.e., through tagging and sharing of resources, users would contribute to building useful semantic structures as well as resource indexes using uncontrolled vocabulary not only due to the easy-to-use mechanics. Henceforth, a variety of assumptions about social tagging systems have emerged, yet testing them has been difficult due to the absence of suitable data. In this work we thoroughly investigate three available assumptions - e.g., is a tagging system really social? - by examining live log data gathered from the real-world public social tagging system BibSonomy. Our empirical results indicate that while some of these assumptions hold to a certain extent, other assumptions need to be reflected and viewed in a very critical light. Our observations have implications for the design of future search and other algorithms to better reflect the actual user behavior.}
}
Demšar, J.
(2006):
Statistical Comparisons of Classifiers over Multiple Data Sets.
In: J. Mach. Learn. Res.,
Vol. 7,
Publisher: JMLR.org.
Year: 2006.
Pages: 1-30.
@article{demvsar2006statistical,
author = {Demšar, Janez},
title = {Statistical Comparisons of Classifiers over Multiple Data Sets},
journal = {J. Mach. Learn. Res.},
publisher = {JMLR.org},
year = {2006},
volume = {7},
pages = {1--30},
url = {http://dl.acm.org/citation.cfm?id=1248547.1248548},
issn = {1532-4435},
keywords = {classification, prediction, significance, testing},
abstract = {While methods for comparing two learning algorithms on a single data set have been scrutinized for quite some time already, the issue of statistical tests for comparisons of more algorithms on multiple data sets, which is even more essential to typical machine learning studies, has been all but ignored. This article reviews the current practice and then theoretically and empirically examines several suitable tests. Based on that, we recommend a set of simple, yet safe and robust non-parametric tests for statistical comparisons of classifiers: the Wilcoxon signed ranks test for comparison of two classifiers and the Friedman test with the corresponding post-hoc tests for comparison of more classifiers over multiple data sets. Results of the latter can also be neatly presented with the newly introduced CD (critical difference) diagrams.}
}
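The Friedman test that Demšar's abstract recommends for comparing several classifiers over multiple data sets can be sketched in plain Python. This is a minimal illustration, not the paper's code: all accuracy values below are invented, and tied scores are not rank-averaged.

```python
# Sketch of the Friedman test statistic (Demšar 2006) for comparing
# k classifiers over N data sets. Hypothetical accuracies; ties ignored.

def friedman_statistic(scores):
    """scores[j][d]: score of classifier j on data set d (higher is better)."""
    k, N = len(scores), len(scores[0])
    avg_ranks = [0.0] * k
    for d in range(N):
        col = [scores[j][d] for j in range(k)]
        order = sorted(range(k), key=lambda j: -col[j])  # best classifier first
        for rank, j in enumerate(order, start=1):
            avg_ranks[j] += rank / N
    # Friedman's chi-square: 12N/(k(k+1)) * (sum_j R_j^2 - k(k+1)^2/4)
    chi2 = 12 * N / (k * (k + 1)) * (sum(R * R for R in avg_ranks)
                                     - k * (k + 1) ** 2 / 4)
    return avg_ranks, chi2

scores = [
    [0.80, 0.77, 0.90, 0.65],  # classifier A
    [0.78, 0.74, 0.91, 0.60],  # classifier B
    [0.76, 0.70, 0.89, 0.63],  # classifier C
]
ranks, chi2 = friedman_statistic(scores)
print(ranks, chi2)  # → [1.25, 2.0, 2.75] 4.5
```

A significant statistic (checked against a chi-square distribution with k-1 degrees of freedom) would then be followed by the post-hoc tests the paper discusses, e.g. the Nemenyi test with its CD (critical difference) diagram.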
Vuong, Q. H.
(1989):
Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses.
In: Econometrica,
Number: 2,
Vol. 57,
Publisher: The Econometric Society.
Year: 1989.
Pages: 307-333.
@article{vuong1989likelihood,
author = {Vuong, Quang H.},
title = {Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses},
journal = {Econometrica},
publisher = {The Econometric Society},
year = {1989},
volume = {57},
number = {2},
pages = {307--333},
url = {http://www.jstor.org/stable/1912557},
issn = {0012-9682},
keywords = {comparison, hypothesis, likelihood, powerLaw, testing},
abstract = {In this paper, we develop a classical approach to model selection. Using the Kullback-Leibler Information Criterion to measure the closeness of a model to the truth, we propose simple likelihood-ratio based statistics for testing the null hypothesis that the competing models are equally close to the true data generating process against the alternative hypothesis that one model is closer. The tests are directional and are derived successively for the cases where the competing models are non-nested, overlapping, or nested and whether both, one, or neither is misspecified. As a prerequisite, we fully characterize the asymptotic distribution of the likelihood ratio statistic under the most general conditions. We show that it is a weighted sum of chi-square distribution or a normal distribution depending on whether the distributions in the competing models closest to the truth are observationally identical. We also propose a test of this latter condition.}
}
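For the strictly non-nested case, the core of Vuong's test reduces to a normal z-statistic on per-observation log-likelihood differences. A minimal sketch, with invented log-likelihood values purely for illustration:

```python
import math

def vuong_z(ll_f, ll_g):
    """Vuong's z-statistic for two strictly non-nested models.

    ll_f, ll_g: per-observation log-likelihoods of models f and g.
    Under the null (both models equally close to the truth in the
    Kullback-Leibler sense), z is asymptotically standard normal;
    a large positive z favors f, a large negative z favors g.
    """
    m = [a - b for a, b in zip(ll_f, ll_g)]
    n = len(m)
    mean = sum(m) / n
    var = sum((x - mean) ** 2 for x in m) / n  # population variance
    return math.sqrt(n) * mean / math.sqrt(var)

# Hypothetical per-observation log-likelihoods of two competing models.
ll_f = [-1.0, -1.2, -0.9, -1.1]
ll_g = [-1.3, -1.1, -1.2, -1.4]
print(round(vuong_z(ll_f, ll_g), 3))  # → 2.309
```

This covers only the non-nested case with fixed log-likelihoods; the paper additionally treats overlapping and nested models and the weighted-chi-square limiting distribution that arises when the closest distributions coincide.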