Zesch, T. & Gurevych, I. Wisdom of crowds versus wisdom of linguists - measuring the semantic relatedness of words. Natural Language Engineering, 2010. [article]
BibTeX:
@article{journals/nle/ZeschG10,
  author = {Zesch, Torsten and Gurevych, Iryna},
  title = {Wisdom of crowds versus wisdom of linguists - measuring the semantic relatedness of words.},
  journal = {Natural Language Engineering},
  year = {2010},
  volume = {16},
  number = {1},
  pages = {25-59},
  url = {http://dblp.uni-trier.de/db/journals/nle/nle16.html#ZeschG10}
}
Du, L., Buntine, W. L. & Jin, H. Sequential Latent Dirichlet Allocation: Discover Underlying Topic Structures within a Document. ICDM, 2010. [inproceedings]
BibTeX:
@inproceedings{conf/icdm/DuBJ10,
  author = {Du, Lan and Buntine, Wray Lindsay and Jin, Huidong},
  title = {Sequential Latent Dirichlet Allocation: Discover Underlying Topic Structures within a Document.},
  booktitle = {ICDM},
  publisher = {IEEE Computer Society},
  year = {2010},
  pages = {148-157},
  url = {http://dblp.uni-trier.de/db/conf/icdm/icdm2010.html#DuBJ10}
}
Levy, O. & Goldberg, Y. Linguistic Regularities in Sparse and Explicit Word Representations. CoNLL, 2014. [inproceedings]
BibTeX:
@inproceedings{conf/conll/LevyG14,
  author = {Levy, Omer and Goldberg, Yoav},
  title = {Linguistic Regularities in Sparse and Explicit Word Representations.},
  booktitle = {CoNLL},
  publisher = {ACL},
  year = {2014},
  pages = {171-180},
  url = {http://dblp.uni-trier.de/db/conf/conll/conll2014.html#LevyG14}
}
Yu, H.-F., Jain, P., Kar, P. & Dhillon, I. S. Large-scale Multi-label Learning with Missing Labels. arXiv:1307.5101, 2013. [misc]
Abstract: The multi-label classification problem has generated significant interest in recent years. However, existing approaches do not adequately address two key challenges: (a) the ability to tackle problems with a large number (say millions) of labels, and (b) the ability to handle data with missing labels. In this paper, we directly address both these problems by studying the multi-label problem in a generic empirical risk minimization (ERM) framework. Our framework, despite being simple, is surprisingly able to encompass several recent label-compression based methods which can be derived as special cases of our method. To optimize the ERM problem, we develop techniques that exploit the structure of specific loss functions - such as the squared loss function - to offer efficient algorithms. We further show that our learning framework admits formal excess risk bounds even in the presence of missing labels. Our risk bounds are tight and demonstrate better generalization performance for low-rank promoting trace-norm regularization when compared to (rank insensitive) Frobenius norm regularization. Finally, we present extensive empirical results on a variety of benchmark datasets and show that our methods perform significantly better than existing label compression based methods and can scale up to very large datasets such as the Wikipedia dataset. (A minimal code sketch of this idea follows the BibTeX entry below.)
BibTeX:
@misc{yu2013largescale,
  author = {Yu, Hsiang-Fu and Jain, Prateek and Kar, Purushottam and Dhillon, Inderjit S.},
  title = {Large-scale Multi-label Learning with Missing Labels},
  year = {2013},
  note = {arXiv:1307.5101},
  url = {http://arxiv.org/abs/1307.5101}
}
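The abstract above describes fitting a low-rank label-score matrix under squared loss computed only over the observed label entries. The following is a minimal sketch of that idea, not the authors' implementation: the function name and all parameter values are ours, and Frobenius penalties on the two factors stand in (as a common surrogate) for trace-norm regularization of their product.
Sketch (Python):
import numpy as np

def lowrank_multilabel_fit(X, Y, observed, rank=10, lam=0.1, lr=0.01, iters=200):
    # Model the n x L label-score matrix as X @ W @ H.T (rank-limited) and
    # take gradient steps on squared loss summed only over the observed
    # (row, col) label entries; missing labels contribute nothing.
    n, d = X.shape
    L = Y.shape[1]
    rng = np.random.default_rng(0)
    W = 0.01 * rng.standard_normal((d, rank))
    H = 0.01 * rng.standard_normal((L, rank))
    rows, cols = observed                     # indices of the non-missing labels
    for _ in range(iters):
        Z = X @ W                             # n x rank latent representation
        pred = Z @ H.T                        # n x L predicted label scores
        R = np.zeros_like(pred)
        R[rows, cols] = pred[rows, cols] - Y[rows, cols]  # residual on observed entries only
        W -= lr * (X.T @ (R @ H) + lam * W)   # gradient plus Frobenius penalty on W
        H -= lr * (R.T @ Z + lam * H)         # gradient plus Frobenius penalty on H
    return W, H

# Toy usage: 100 examples, 20 features, 50 labels, ~30% of entries observed.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 20))
Y = (rng.random((100, 50)) < 0.2).astype(float)
observed = np.nonzero(rng.random(Y.shape) < 0.3)
W, H = lowrank_multilabel_fit(X, Y, observed)
scores = X @ W @ H.T                          # scores for all labels, including missing ones
Because the learned score matrix X @ W @ H.T is rank-limited, the labels are coupled, so the observed entries inform predictions for the missing ones.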
Mirowski, P., Ranzato, M. & LeCun, Y. Dynamic Auto-Encoders for Semantic Indexing. 2010. [inproceedings]
BibTeX:
@inproceedings{mirowski2010dynamic,
  author = {Mirowski, Piotr and Ranzato, Marc'Aurelio and LeCun, Yann},
  title = {Dynamic Auto-Encoders for Semantic Indexing},
  year = {2010},
  url = {http://yann.lecun.com/exdb/publis/pdf/mirowski-nipsdl-10.pdf}
}
Karampatziakis, N. & Mineiro, P. Discriminative Features via Generalized Eigenvectors. arXiv:1310.1934, 2013. [misc]
Abstract: Representing examples in a way that is compatible with the underlying classifier can greatly enhance the performance of a learning system. In this paper we investigate scalable techniques for inducing discriminative features by taking advantage of simple second order structure in the data. We focus on multiclass classification and show that features extracted from the generalized eigenvectors of the class conditional second moments lead to classifiers with excellent empirical performance. Moreover, these features have attractive theoretical properties, such as inducing representations that are invariant to linear transformations of the input. We evaluate classifiers built from these features on three different tasks, obtaining state of the art results. (A minimal code sketch of this idea follows the BibTeX entry below.)
BibTeX:
@misc{karampatziakis2013discriminative,
  author = {Karampatziakis, Nikos and Mineiro, Paul},
  title = {Discriminative Features via Generalized Eigenvectors},
  year = {2013},
  note = {arXiv:1310.1934},
  url = {http://arxiv.org/abs/1310.1934}
}
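As a concrete reading of the abstract above, here is a small sketch, assuming empirically estimated, ridge-regularized class-conditional second moments; solving each class pair's generalized eigenproblem with scipy.linalg.eigh and squaring the projections is our illustrative choice, not necessarily the paper's exact recipe.
Sketch (Python):
import numpy as np
from scipy.linalg import eigh

def gev_features(X, y, n_vecs=2, ridge=1e-3):
    # For each ordered class pair (a, b), solve C_a v = lambda * C_b v, where
    # C_k is the ridge-regularized second moment E[x x^T | y = k]. Directions
    # with large lambda carry far more energy under class a than under class b,
    # which is what makes them discriminative.
    classes = np.unique(y)
    d = X.shape[1]
    moments = {c: X[y == c].T @ X[y == c] / np.sum(y == c) + ridge * np.eye(d)
               for c in classes}
    dirs = []
    for a in classes:
        for b in classes:
            if a == b:
                continue
            vals, vecs = eigh(moments[a], moments[b])   # eigenvalues in ascending order
            dirs.append(vecs[:, -n_vecs:])              # keep the largest-lambda directions
    V = np.hstack(dirs)                                 # d x (num_pairs * n_vecs)
    return (X @ V) ** 2                                 # squared projections as features

# Toy usage: the induced features can be fed to any off-the-shelf linear classifier.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 10))
y = rng.integers(0, 3, size=300)
F = gev_features(X, y)   # 300 x 12 (3 classes -> 6 ordered pairs, 2 vectors each)
The ridge term keeps each second-moment matrix positive definite, which scipy.linalg.eigh requires of the right-hand matrix in the generalized problem.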
Kohlschütter, C., Fankhauser, P. & Nejdl, W. Boilerplate Detection using Shallow Text Features. Proc. of the 3rd ACM International Conference on Web Search and Data Mining (WSDM 2010), New York City, NY, USA, 2010. [inproceedings]
BibTeX:
@inproceedings{conf/wsdm/KohlschutterFN10,
  author = {Kohlschütter, Christian and Fankhauser, Peter and Nejdl, Wolfgang},
  title = {Boilerplate Detection using Shallow Text Features},
  booktitle = {Proc. of the 3rd ACM International Conference on Web Search and Data Mining (WSDM 2010), New York City, NY, USA},
  year = {2010}
}
