%0 Conference Proceedings
%A Levy, Omer & Goldberg, Yoav
%D 2014
%T Linguistic Regularities in Sparse and Explicit Word Representations
%E Morante, Roser & Yih, Wen-tau
%B CoNLL
%I ACL
%P 171-180
%@ 978-1-941643-02-0
%3 inproceedings
%4 conf/conll/2014
%F conf/conll/LevyG14
%K kallimachos, posts, representation, similarity, toread, word
%U http://dblp.uni-trier.de/db/conf/conll/conll2014.html#LevyG14

%0 Generic
%A Karampatziakis, Nikos & Mineiro, Paul
%D 2013
%T Discriminative Features via Generalized Eigenvectors
%3 misc
%F karampatziakis2013discriminative
%K analysis, eigenvector, feature, kallimachos
%X Representing examples in a way that is compatible with the underlying classifier can greatly enhance the performance of a learning system. In this paper we investigate scalable techniques for inducing discriminative features by taking advantage of simple second order structure in the data. We focus on multiclass classification and show that features extracted from the generalized eigenvectors of the class conditional second moments lead to classifiers with excellent empirical performance. Moreover, these features have attractive theoretical properties, such as inducing representations that are invariant to linear transformations of the input. We evaluate classifiers built from these features on three different tasks, obtaining state of the art results.
%Z cite arxiv:1310.1934
%U http://arxiv.org/abs/1310.1934

%0 Generic
%A Yu, Hsiang-Fu; Jain, Prateek; Kar, Purushottam & Dhillon, Inderjit S.
%D 2013
%T Large-scale Multi-label Learning with Missing Labels
%3 misc
%F yu2013largescale
%K classification, kallimachos, label, large, learning, multi
%X The multi-label classification problem has generated significant interest in recent years. However, existing approaches do not adequately address two key challenges: (a) the ability to tackle problems with a large number (say millions) of labels, and (b) the ability to handle data with missing labels. In this paper, we directly address both these problems by studying the multi-label problem in a generic empirical risk minimization (ERM) framework. Our framework, despite being simple, is surprisingly able to encompass several recent label-compression based methods which can be derived as special cases of our method. To optimize the ERM problem, we develop techniques that exploit the structure of specific loss functions - such as the squared loss function - to offer efficient algorithms. We further show that our learning framework admits formal excess risk bounds even in the presence of missing labels. Our risk bounds are tight and demonstrate better generalization performance for low-rank promoting trace-norm regularization when compared to (rank insensitive) Frobenius norm regularization. Finally, we present extensive empirical results on a variety of benchmark datasets and show that our methods perform significantly better than existing label compression based methods and can scale up to very large datasets such as the Wikipedia dataset.
%Z cite arxiv:1307.5101
%U http://arxiv.org/abs/1307.5101

%0 Conference Proceedings
%A Du, Lan; Buntine, Wray Lindsay & Jin, Huidong
%D 2010
%T Sequential Latent Dirichlet Allocation: Discover Underlying Topic Structures within a Document
%E Webb, Geoffrey I.; Liu, Bing; Zhang, Chengqi; Gunopulos, Dimitrios & Wu, Xindong
%B ICDM
%I IEEE Computer Society
%P 148-157
%@ 978-0-7695-4256-0
%3 inproceedings
%4 conf/icdm/2010
%F conf/icdm/DuBJ10
%K genre, kallimachos, plot, toread
%U http://dblp.uni-trier.de/db/conf/icdm/icdm2010.html#DuBJ10

%0 Conference Proceedings
%A Kohlschütter, Christian; Fankhauser, Peter & Nejdl, Wolfgang
%D 2010
%T Boilerplate Detection using Shallow Text Features
%B Proceedings of the 3rd ACM International Conference on Web Search and Data Mining (WSDM 2010), New York City, NY, USA
%3 inproceedings
%F conf/wsdm/KohlschutterFN10
%K features, kallimachos, text, toread

%0 Conference Proceedings
%A Mirowski, Piotr; Ranzato, Marc'Aurelio & LeCun, Yann
%D 2010
%T Dynamic Auto-Encoders for Semantic Indexing
%B Proceedings of the NIPS 2010 Workshop on Deep Learning
%3 inproceedings
%F noauthororeditor
%K deep, kallimachos, lda, learning, model, toread
%U http://yann.lecun.com/exdb/publis/pdf/mirowski-nipsdl-10.pdf

%0 Journal Article
%A Zesch, Torsten & Gurevych, Iryna
%D 2010
%T Wisdom of crowds versus wisdom of linguists - measuring the semantic relatedness of words
%B Natural Language Engineering
%V 16
%N 1
%P 25-59
%3 article
%F journals/nle/ZeschG10
%K datasets, kallimachos, measure, posts, relatedness, semantic
%U http://dblp.uni-trier.de/db/journals/nle/nle16.html#ZeschG10