Web spam pages use various techniques to achieve
higher-than-deserved rankings in a search engine’s
results. While human experts can identify
spam, it is too expensive to manually evaluate a
large number of pages. Instead, we propose techniques
to semi-automatically separate reputable,
good pages from spam. We first select a small set
of seed pages to be evaluated by an expert. Once
we manually identify the reputable seed pages, we
use the link structure of the web to discover other
pages that are likely to be good. In this paper
we discuss possible ways to implement the seed
selection and the discovery of good pages. We
present results of experiments run on the World
Wide Web indexed by AltaVista and evaluate the
performance of our techniques. Our results show
that we can effectively filter out spam from a significant
fraction of the web, based on a good seed
set of fewer than 200 sites.
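The propagation of "goodness" from the vetted seeds along the link structure can be illustrated with a small sketch. This is a hypothetical, biased-PageRank-style implementation: the graph, seed set, decay factor, and iteration count are illustrative assumptions, not the paper's exact method.

```python
def propagate_trust(out_links, seeds, decay=0.85, iterations=20):
    """Propagate trust from a manually vetted seed set over the link graph.

    Trust mass originates only at the seed pages and flows along
    hyperlinks, attenuating by `decay` with each hop. Pages unreachable
    from any seed accumulate no trust.
    """
    # Collect every page appearing as a source or a link target.
    pages = set(out_links) | {p for links in out_links.values() for p in links}
    # All initial trust mass sits uniformly on the vetted seeds.
    seed_mass = {p: (1.0 / len(seeds) if p in seeds else 0.0) for p in pages}
    trust = dict(seed_mass)
    for _ in range(iterations):
        # Restart component: a (1 - decay) share of mass returns to the seeds.
        nxt = {p: (1 - decay) * seed_mass[p] for p in pages}
        for page, links in out_links.items():
            if links:
                # Split the damped trust of `page` evenly among its out-links.
                share = decay * trust[page] / len(links)
                for target in links:
                    nxt[target] += share
        trust = nxt
    return trust

# Tiny illustrative graph: "good" is a seed linking to "unknown";
# "spam" receives no links from trusted pages.
graph = {
    "good": ["unknown"],
    "unknown": ["good"],
    "spam": ["unknown"],
}
scores = propagate_trust(graph, seeds={"good"})
```

In this toy run, `spam` ends with zero trust because no path from the seed reaches it, while `unknown` inherits some trust through its in-link from the seed; a threshold on the resulting scores would then separate likely-good pages from likely-spam ones.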