Hotho, A.; Staab, S. & Stumme, G.: Explaining Text Clustering Results using Semantic Structures. In: Lavrač, N.; Gamberger, D.; Todorovski, L. & Blockeel, H. (Eds.):
Knowledge Discovery in Databases: PKDD 2003, 7th European Conference on Principles and Practice of Knowledge Discovery in Databases. Heidelberg: Springer, 2003 (LNAI 2838), pp. 217-228
Common text clustering techniques offer rather poor capabilities
for explaining to their users why a particular result has been
achieved. They do not relate semantically nearby terms, and they cannot
explain how the resulting clusters are related to each other.
In this paper, we discuss a way of integrating a large thesaurus
and the computation of lattices of resulting clusters into common text clustering
in order to overcome these two problems.
As its major result, our approach achieves an explanation using an
appropriate level of granularity at the concept level as well as
an appropriate size and complexity of the explaining lattice of
resulting clusters.
Hotho, A.; Staab, S. & Stumme, G.:
Text Clustering Based on Background Knowledge. 2003
Text document clustering plays an important role in providing intuitive
navigation and browsing mechanisms by organizing large amounts of information
into a small number of meaningful clusters. Standard partitional or agglomerative
clustering methods efficiently compute results to this end.
However, the bag of words representation used for these clustering methods is often
unsatisfactory as it ignores relationships between important terms that do not
co-occur literally. Also, it is mostly left to the user to find out why a particular partitioning
has been achieved, because it is only specified extensionally. In order to
deal with the two problems, we integrate background knowledge into the process of
clustering text documents.
First, we preprocess the texts, enriching their representations by background knowledge
provided in a core ontology, in our application WordNet. Then, we cluster
the documents by a partitional algorithm. Our experimental evaluation on Reuters
newsfeeds compares clustering results with pre-categorizations of news. In the experiments,
background knowledge yields improvements over the baseline for many interesting tasks.
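The enrichment step described above can be sketched in a few lines. The hypernym table below is a hypothetical toy stand-in for WordNet (not the real WordNet API), but it shows the effect: two documents that share almost no literal terms become more similar once their shared superconcept is added as a feature.

```python
from collections import Counter
from math import sqrt

# Toy hypernym table standing in for WordNet (hypothetical data):
# each term maps to a more general concept.
HYPERNYMS = {"beef": "food", "corn": "food", "pork": "food",
             "gold": "metal", "copper": "metal"}

def vectorize(text, use_background=False):
    """Bag-of-words vector; optionally enriched with superconcepts."""
    terms = text.lower().split()
    counts = Counter(terms)
    if use_background:
        for t in terms:
            if t in HYPERNYMS:
                counts[HYPERNYMS[t]] += 1  # add the superconcept as an extra feature
    return counts

def cosine(a, b):
    """Cosine similarity between two sparse term vectors."""
    dot = sum(v * b[t] for t, v in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two documents sharing only the literal term "prices":
d1, d2 = "prices for beef rose", "corn prices fell"
plain = cosine(vectorize(d1), vectorize(d2))
enriched = cosine(vectorize(d1, True), vectorize(d2, True))
# After enrichment both documents also share the concept "food",
# so their cosine similarity increases.
```

The enriched vectors then feed into any standard partitional algorithm (e.g. k-means) unchanged, since only the document representation differs.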
Second, the clustering partitions the large number of documents to a relatively small
number of clusters, which may then be analyzed by conceptual clustering. In our approach,
we applied Formal Concept Analysis. Conceptual clustering techniques are
known to be too slow for directly clustering several hundred documents, but they
give an intensional account of cluster results. They allow for a concise description
of commonalities and distinctions of different clusters. With background knowledge
they even find abstractions like “food” (vs. specializations like “beef” or “corn”).
Thus, in our approach, partitional clustering first reduces the size of the problem
such that it becomes tractable for conceptual clustering, which then facilitates the
understanding of the results.
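The Formal Concept Analysis step can be illustrated on a toy context (hypothetical data, far smaller than the cluster-by-concept contexts in the paper): objects are text clusters, attributes are the concepts occurring in them, and a formal concept is a pair (extent, intent) closed under both derivation maps.

```python
from itertools import combinations

# Toy formal context (hypothetical data): text clusters as objects,
# the concepts occurring in them as attributes.
context = {
    "cluster1": {"food", "beef"},
    "cluster2": {"food", "corn"},
    "cluster3": {"metal", "gold"},
}
ALL_ATTRS = set().union(*context.values())

def intent(objects):
    """Attributes common to all given objects."""
    sets = [context[o] for o in objects]
    return set.intersection(*sets) if sets else set(ALL_ATTRS)

def extent(attrs):
    """Objects possessing all given attributes."""
    return {o for o, s in context.items() if attrs <= s}

# Enumerate all formal concepts by closing every object subset
# under the two derivation maps (feasible only for tiny contexts).
concepts = set()
for r in range(len(context) + 1):
    for combo in combinations(context, r):
        B = frozenset(intent(combo))
        concepts.add((frozenset(extent(B)), B))

# The concept ({cluster1, cluster2}, {food}) makes the shared
# abstraction "food" explicit, as described above.
```

Ordering these concepts by inclusion of their extents yields the concept lattice that gives the intensional account of the cluster results.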
Hotho, A. & Stumme, G.: Conceptual Clustering of Text Clusters. In: Kókai, G. & Zeidler, J. (Eds.):
Proc. Fachgruppentreffen Maschinelles Lernen (FGML 2002). 2002, pp. 37-45
Stumme, G.; Taouil, R.; Bastide, Y. & Lakhal, L.: Conceptual Clustering with Iceberg Concept Lattices. In: Klinkenberg, R.; Rüping, S.; Fick, A.; Henze, N.; Herzog, C.; Molitor, R. & Schröder, O. (Eds.):
Proc. GI-Fachgruppentreffen Maschinelles Lernen (FGML'01). Universität Dortmund, Report 763, 2001