Publications
Linguistic Regularities in Sparse and Explicit Word Representations.
Levy, O. & Goldberg, Y.
Morante, R. & Yih, W.-t., ed., 'CoNLL', ACL, 171-180 (2014) [pdf]
Discriminative Features via Generalized Eigenvectors
Karampatziakis, N. & Mineiro, P.
(2013) [pdf]
Representing examples in a way that is compatible with the underlying classifier can greatly enhance the performance of a learning system. In this paper we investigate scalable techniques for inducing discriminative features by taking advantage of simple second order structure in the data. We focus on multiclass classification and show that features extracted from the generalized eigenvectors of the class conditional second moments lead to classifiers with excellent empirical performance. Moreover, these features have attractive theoretical properties, such as inducing representations that are invariant to linear transformations of the input. We evaluate classifiers built from these features on three different tasks, obtaining state of the art results.
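As a rough illustration of the second-order idea in this abstract, the sketch below is my own assumption of how such features could be computed, not the authors' code; the function name generalized_eigenfeatures and the ridge term reg are illustrative. It forms the class-conditional second moment matrices for a pair of classes and keeps the top generalized eigenvectors as projection directions.

    import numpy as np
    from scipy.linalg import eigh

    def generalized_eigenfeatures(X, y, class_i, class_j, k=3, reg=1e-6):
        # Class-conditional (uncentered) second moment matrices.
        Xi = X[y == class_i]
        Xj = X[y == class_j]
        Ci = Xi.T @ Xi / len(Xi)
        Cj = Xj.T @ Xj / len(Xj) + reg * np.eye(X.shape[1])  # ridge keeps Cj positive definite
        # Solve Ci v = lambda Cj v; eigh returns eigenvalues in ascending order.
        _, V = eigh(Ci, Cj)
        return V[:, -k:]  # top-k generalized eigenvectors as projection directions

    # Example: augment the raw features with the learned projections.
    # Z = np.hstack([X, X @ generalized_eigenfeatures(X, y, 0, 1)])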
Large-scale Multi-label Learning with Missing Labels
Yu, H.-F.; Jain, P.; Kar, P. & Dhillon, I. S.
(2013) [pdf]
The multi-label classification problem has generated significant interest in recent years. However, existing approaches do not adequately address two key challenges: (a) the ability to tackle problems with a large number (say millions) of labels, and (b) the ability to handle data with missing labels. In this paper, we directly address both these problems by studying the multi-label problem in a generic empirical risk minimization (ERM) framework. Our framework, despite being simple, is surprisingly able to encompass several recent label-compression based methods which can be derived as special cases of our method. To optimize the ERM problem, we develop techniques that exploit the structure of specific loss functions - such as the squared loss function - to offer efficient algorithms. We further show that our learning framework admits formal excess risk bounds even in the presence of missing labels. Our risk bounds are tight and demonstrate better generalization performance for low-rank promoting trace-norm regularization when compared to (rank insensitive) Frobenius norm regularization. Finally, we present extensive empirical results on a variety of benchmark datasets and show that our methods perform significantly better than existing label compression based methods and can scale up to very large datasets such as the Wikipedia dataset.
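The following is a minimal sketch, under my own assumptions rather than the authors' implementation, of squared-loss ERM over only the observed label entries. It uses an explicit low-rank factorization with Frobenius penalties as a common surrogate for the trace-norm regularization mentioned in the abstract; the function name multilabel_erm and all hyperparameters are illustrative.

    import numpy as np

    def multilabel_erm(X, Y, observed, k=10, lam=0.1, lr=0.01, epochs=100):
        # X: n x d features, Y: n x L label matrix, observed: n x L boolean mask.
        n, d = X.shape
        L = Y.shape[1]
        rng = np.random.default_rng(0)
        W = rng.normal(scale=0.01, size=(d, k))   # feature-side factor
        H = rng.normal(scale=0.01, size=(L, k))   # label-side factor
        rows, cols = np.nonzero(observed)         # indices of observed labels only
        for _ in range(epochs):
            P = X @ W                             # n x k latent representation
            resid = (P[rows] * H[cols]).sum(axis=1) - Y[rows, cols]
            # Gradients restricted to observed entries, plus regularization.
            G = np.zeros_like(P)
            np.add.at(G, rows, resid[:, None] * H[cols])
            gW = X.T @ G / len(rows) + lam * W
            gH = np.zeros_like(H)
            np.add.at(gH, cols, resid[:, None] * P[rows])
            gH = gH / len(rows) + lam * H
            W -= lr * gW
            H -= lr * gH
        return W, H   # predict scores for all labels with X @ W @ H.T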
Sequential Latent Dirichlet Allocation: Discover Underlying Topic Structures within a Document.
Du, L.; Buntine, W. L. & Jin, H.
Webb, G. I.; Liu, B.; Zhang, C.; Gunopulos, D. & Wu, X., ed., 'ICDM', IEEE Computer Society, 148-157 (2010) [pdf]
Boilerplate Detection using Shallow Text Features
Kohlschütter, C.; Fankhauser, P. & Nejdl, W.
'Proc. of 3rd ACM International Conference on Web Search and Data Mining (WSDM 2010), New York City, NY, USA' (2010)
Dynamic Auto-Encoders for Semantic Indexing
Mirowski, P.; Ranzato, M. & LeCun, Y.
Proc. of the NIPS 2010 Workshop on Deep Learning (2010) [pdf]
Wisdom of crowds versus wisdom of linguists - measuring the semantic relatedness of words.
Zesch, T. & Gurevych, I.
Natural Language Engineering, 16(1) 25-59 (2010) [pdf]