Publications
Towards Few-Shot Time Series Anomaly Detection with Temporal Attention and Dynamic Thresholding
Nivarthi, C. P. & Sick, B.
'International Conference on Machine Learning and Applications (ICMLA)', IEEE, [10.1109/ICMLA58977.2023.00218], 1444-1450 (2023)
Anomaly detection plays a pivotal role in diverse real-world applications such as cybersecurity, fault detection, network monitoring, predictive maintenance, and highly automated driving. However, obtaining labeled anomalous data can be a formidable challenge, especially when anomalies exhibit temporal evolution. This paper introduces LATAM (Long short-term memory Autoencoder with Temporal Attention Mechanism) for few-shot anomaly detection, with the aim of enhancing detection performance in scenarios with limited labeled anomaly data. LATAM effectively captures temporal dependencies and emphasizes significant patterns in multivariate time series data. In our investigation, we comprehensively evaluate LATAM against other anomaly detection models, particularly assessing its capability in few-shot learning scenarios where we have minimal examples from the normal class and none from the anomalous class in the training data. Our experimental results, derived from real-world photovoltaic inverter data, highlight LATAM's superiority, showcasing a substantial improvement in mean F1 score even when trained on a mere two-week dataset. Furthermore, LATAM demonstrates remarkable results on the open-source SWaT dataset, achieving a 12% boost in accuracy with only two days of training data. Moreover, we introduce a simple yet effective dynamic thresholding mechanism, further enhancing the anomaly detection capabilities of LATAM. This underscores LATAM's efficacy in addressing the challenges posed by limited labeled anomalies in practical scenarios, and it proves valuable for downstream tasks involving temporal representation and time series prediction, extending its utility beyond anomaly detection applications.
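The abstract does not spell out the thresholding rule; as a hedged illustration, the sketch below implements one common dynamic-thresholding variant (rolling mean plus k standard deviations of the reconstruction error). The window length, k, and the toy data are assumptions for illustration, not values from the paper.

    import numpy as np

    def dynamic_threshold(errors, window=288, k=4.0):
        """Flag time steps whose reconstruction error exceeds a rolling
        mean + k * std threshold over a trailing window.

        errors : 1-D array of per-step reconstruction errors, e.g. from
                 an LSTM autoencoder such as LATAM.
        window : trailing window length; 288 steps = one day at 5-minute
                 resolution (an assumption, not taken from the paper).
        k      : sensitivity, in standard deviations above the mean.
        """
        errors = np.asarray(errors, dtype=float)
        n = len(errors)
        flags = np.zeros(n, dtype=bool)
        thresholds = np.full(n, np.inf)
        for t in range(window, n):
            hist = errors[t - window:t]
            thresholds[t] = hist.mean() + k * hist.std()
            flags[t] = errors[t] > thresholds[t]
        return flags, thresholds

    # Toy usage: one spike injected into otherwise smooth errors.
    rng = np.random.default_rng(0)
    err = rng.normal(0.1, 0.02, 1000)
    err[700] = 0.5
    flags, _ = dynamic_threshold(err)
    print(np.flatnonzero(flags))  # expect roughly [700]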
Never-Ending Learning
Mitchell, T.; Cohen, W.; Hruschka, E.; Talukdar, P.; Betteridge, J.; Carlson, A.; Dalvi, B.; Gardner, M.; Kisiel, B.; Krishnamurthy, J.; Lao, N.; Mazaitis, K.; Mohamed, T.; Nakashole, N.; Platanios, E.; Ritter, A.; Samadi, M.; Settles, B.; Wang, R.; Wijaya, D.; Gupta, A.; Chen, X.; Saparov, A.; Greaves, M. & Welling, J.
'AAAI' (2015) [pdf]
Human-level control through deep reinforcement learning
Mnih, V.; Kavukcuoglu, K.; Silver, D.; Rusu, A. A.; Veness, J.; Bellemare, M. G.; Graves, A.; Riedmiller, M.; Fidjeland, A. K.; Ostrovski, G.; Petersen, S.; Beattie, C.; Sadik, A.; Antonoglou, I.; King, H.; Kumaran, D.; Wierstra, D.; Legg, S. & Hassabis, D.
Nature, 518(7540) 529-533 (2015) [pdf]
ConDist: A Context-Driven Categorical Distance Measure
Ring, M.; Otto, F.; Becker, M.; Niebler, T.; Landes, D. & Hotho, A.
'ECML PKDD 2015' (2015)
Large-scale factorization of type-constrained multi-relational data
Krompass, D.; Nickel, M. & Tresp, V.
'International Conference on Data Science and Advanced Analytics, DSAA 2014, Shanghai, China, October 30 - November 1, 2014', IEEE, [10.1109/DSAA.2014.7058046], 18-24 (2014) [pdf]
An Introduction to Ontology Learning
Lehmann, J. & Voelker, J.
Lehmann, J. & Voelker, J., ed., 'Perspectives on Ontology Learning', AKA / IOS Press, ix-xvi (2014) [pdf]
From Topic Models to Semi-supervised Learning: Biasing Mixed-Membership Models to Exploit Topic-Indicative Features in Entity Clustering.
Balasubramanyan, R.; Dalvi, B. B. & Cohen, W. W.
Blockeel, H.; Kersting, K.; Nijssen, S. & Zelezný, F., ed., 'ECML/PKDD (2)', 8189, Lecture Notes in Computer Science, Springer, 628-642 (2013) [pdf]
Towards a Productivity Measurement Model for Technology Mediated Learning Services
Bitzer, P. & Söllner, M.
'European Conference on Information Systems (ECIS)', Utrecht, Netherlands (accepted for publication) (2013)
Towards a Reference Model for a Productivity-optimized Delivery of Technology Mediated Learning Services
Bitzer, P.; Weiß, F. & Leimeister, J. M.
'Eighth International Conference on Design Science Research in Information Systems and Technology (DESRIST)', Helsinki, Finland (accepted for publication) (2013)
Exploiting Structural Consistencies with Stacked Conditional Random Fields
Kluegl, P.; Toepfer, M.; Lemmerich, F.; Hotho, A. & Puppe, F.
Mathematical Methodologies in Pattern Recognition and Machine Learning, Springer Proceedings in Mathematics & Statistics, 30, 111-125 (2013)
Conditional Random Fields (CRF) are popular methods for labeling unstructured or textual data. Like many machine learning approaches, these undirected graphical models assume the instances to be independently distributed. However, in real-world applications data is grouped in a natural way, e.g., by its creation context. The instances in each group often share additional structural consistencies. This paper proposes a domain-independent method for exploiting these consistencies by combining two CRFs in a stacked learning framework. We apply rule learning collectively on the predictions of an initial CRF for one context to acquire descriptions of its specific properties. Then, we utilize these descriptions as dynamic and high quality features in an additional (stacked) CRF. The presented approach is evaluated with a real-world dataset for the segmentation of references and achieves a significant reduction of the labeling error.
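As a rough illustration of the stacked setup described above (assuming the sklearn-crfsuite package; the feature names and the per-document majority-label heuristic are stand-ins for the paper's rule-learning step, not its actual method):

    from collections import Counter
    import sklearn_crfsuite  # assumption: sklearn-crfsuite is installed

    def base_features(tokens):
        # Token-level features for the first (base) CRF.
        return [{"word": w.lower(), "is_digit": w.isdigit()} for w in tokens]

    def dynamic_features(tokens, predicted):
        # Context-specific features derived from the base CRF's output on
        # the same document; the majority label per surface form stands in
        # for the paper's collectively learned rule descriptions.
        counts = {}
        for w, y in zip(tokens, predicted):
            counts.setdefault(w.lower(), Counter())[y] += 1
        feats = base_features(tokens)
        for f, w in zip(feats, tokens):
            f["doc_majority_label"] = counts[w.lower()].most_common(1)[0][0]
        return feats

    def train_stacked(docs, labels):
        # Stage 1: ordinary CRF on base features.
        crf1 = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
        crf1.fit([base_features(d) for d in docs], labels)
        # Stage 2: stacked CRF also sees prediction-derived features.
        # (A real setup would use cross-validated predictions here, so the
        # stacked model does not see the base model's training-set fit.)
        preds = crf1.predict([base_features(d) for d in docs])
        crf2 = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=50)
        crf2.fit([dynamic_features(d, p) for d, p in zip(docs, preds)], labels)
        return crf1, crf2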
Large-scale Multi-label Learning with Missing Labels
Yu, H.-F.; Jain, P.; Kar, P. & Dhillon, I. S.
(2013) [pdf]
The multi-label classification problem has generated significant interest in recent years. However, existing approaches do not adequately address two key challenges: (a) the ability to tackle problems with a large number (say millions) of labels, and (b) the ability to handle data with missing labels. In this paper, we directly address both these problems by studying the multi-label problem in a generic empirical risk minimization (ERM) framework. Our framework, despite being simple, is surprisingly able to encompass several recent label-compression based methods which can be derived as special cases of our method. To optimize the ERM problem, we develop techniques that exploit the structure of specific loss functions - such as the squared loss function - to offer efficient algorithms. We further show that our learning framework admits formal excess risk bounds even in the presence of missing labels. Our risk bounds are tight and demonstrate better generalization performance for low-rank promoting trace-norm regularization when compared to (rank insensitive) Frobenius norm regularization. Finally, we present extensive empirical results on a variety of benchmark datasets and show that our methods perform significantly better than existing label compression based methods and can scale up to very large datasets such as the Wikipedia dataset.
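A minimal sketch of the ERM formulation described above, assuming a squared loss over observed label entries and a factored low-rank linear model; penalizing the factors' Frobenius norms upper-bounds the trace norm of their product, but the paper's actual algorithms and risk bounds are not reproduced here.

    import numpy as np

    def lowrank_multilabel(X, Y, observed, rank=10, lam=0.1, lr=0.01, iters=500):
        """Squared loss over *observed* label entries only, with a
        rank-constrained model Z = X U V^T.

        X        : (n, d) feature matrix
        Y        : (n, L) label matrix, arbitrary where unobserved
        observed : (n, L) boolean mask of observed entries
        """
        rng = np.random.default_rng(0)
        U = 0.01 * rng.standard_normal((X.shape[1], rank))
        V = 0.01 * rng.standard_normal((Y.shape[1], rank))
        for _ in range(iters):
            Z = X @ U @ V.T                      # predicted label scores
            R = np.where(observed, Z - Y, 0.0)   # residual on observed entries
            gU = X.T @ R @ V + lam * U           # gradient of the masked squared
            gV = R.T @ (X @ U) + lam * V         # loss plus Frobenius penalties
            U -= lr * gU
            V -= lr * gV
        return U, V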
Virtual Learning Communities: Success Factors and Challenges
Wegener, R. & Leimeister, J. M.
International Journal of Technology Enhanced Learning (IJTEL), 4(5/6) 383-397 (2012)
An analysis of single-layer networks in unsupervised feature learning
Coates, A.; Lee, H. & Ng, A.
Gordon, G.; Dunson, D. & Dudík, M., ed., 'Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics', 15, JMLR Workshop and Conference Proceedings, JMLR W&CP, 215-223 (2011) [pdf]
A great deal of research has focused on algorithms for learning features from unlabeled data. Indeed, much progress has been made on benchmark datasets like NORB and CIFAR-10 by employing increasingly complex unsupervised learning algorithms and deep models. In this paper, however, we show that several simple factors, such as the number of hidden nodes in the model, may be more important to achieving high performance than the learning algorithm or the depth of the model. Specifically, we will apply several off-the-shelf feature learning algorithms (sparse auto-encoders, sparse RBMs, K-means clustering, and Gaussian mixtures) to CIFAR-10, NORB, and STL datasets using only single-layer networks. We then present a detailed analysis of the effect of changes in the model setup: the receptive field size, number of hidden nodes (features), the step-size ("stride") between extracted features, and the effect of whitening. Our results show that large numbers of hidden nodes and dense feature extraction are critical to achieving high performance - so critical, in fact, that when these parameters are pushed to their limits, we achieve state-of-the-art performance on both CIFAR-10 and NORB using only a single layer of features. More surprisingly, our best performance is based on K-means clustering, which is extremely fast, has no hyper-parameters to tune beyond the model structure itself, and is very easy to implement. Despite the simplicity of our system, we achieve accuracy beyond all previously published results on the CIFAR-10 and NORB datasets (79.6% and 97.2% respectively).
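A compact sketch of the pipeline the abstract highlights: K-means dictionary learning over whitened patches, followed by the paper's "triangle" encoding. Patch size, k, and the whitening epsilon here are illustrative assumptions.

    import numpy as np
    from sklearn.cluster import KMeans

    def zca_whiten(patches, eps=0.1):
        # ZCA whitening of flattened patches (one patch per row).
        patches = patches - patches.mean(axis=0)
        d, E = np.linalg.eigh(np.cov(patches, rowvar=False))
        return patches @ (E @ np.diag(1.0 / np.sqrt(d + eps)) @ E.T)

    def kmeans_triangle_features(patches, k=50, seed=0):
        # Learn a dictionary with K-means, then encode each patch with the
        # "triangle" activation: f_k = max(0, mean(z) - z_k), where z_k is
        # the distance to centroid k.
        km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(patches)
        z = np.linalg.norm(patches[:, None, :] - km.cluster_centers_[None], axis=2)
        return np.maximum(0.0, z.mean(axis=1, keepdims=True) - z)

    # Toy usage on random "6x6 patches"; real inputs would be patches
    # cropped densely from CIFAR-10 / NORB / STL images.
    rng = np.random.default_rng(0)
    feats = kmeans_triangle_features(zca_whiten(rng.random((500, 36))))
    print(feats.shape)  # (500, 50)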
Text Detection and Character Recognition in Scene Images with Unsupervised Feature Learning
Coates, A.; Carpenter, B.; Case, C.; Satheesh, S.; Suresh, B.; Wang, T.; Wu, D. & Ng, A.
'International Conference on Document Analysis and Recognition (ICDAR)', [10.1109/ICDAR.2011.95], 440-445 (2011) [pdf]
Reading text from photographs is a challenging problem that has received a significant amount of attention. Two key components of most systems are (i) text detection from images and (ii) character recognition, and many recent methods have been proposed to design better feature representations and models for both. In this paper, we apply methods recently developed in machine learning -- specifically, large-scale algorithms for learning the features automatically from unlabeled data -- and show that they allow us to construct highly effective classifiers for both detection and recognition to be used in a high accuracy end-to-end system.
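As a hedged sketch of the detection half of such a system (the score_patch classifier, window size, stride, and threshold are assumptions for illustration, not the paper's actual components):

    def detect_text_regions(image, score_patch, win=32, stride=8, thresh=0.5):
        # Slide a window over the image and keep locations where a
        # patch-level text/no-text classifier fires. `score_patch` is
        # assumed to be a trained model (e.g. a linear classifier over
        # learned features) mapping a win x win patch to a probability.
        H, W = image.shape
        hits = []
        for y in range(0, H - win + 1, stride):
            for x in range(0, W - win + 1, stride):
                s = score_patch(image[y:y + win, x:x + win])
                if s > thresh:
                    hits.append((y, x, float(s)))
        return hits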
Toward an Architecture for Never-Ending Language Learning
Carlson, A.; Betteridge, J.; Kisiel, B.; Settles, B.; Hruschka Jr., E. R. & Mitchell, T.
'Proceedings of the Conference on Artificial Intelligence (AAAI)', AAAI Press, 1306-1313 (2010)
Dynamic Auto-Encoders for Semantic Indexing
Mirowski, P.; Ranzato, M. & LeCun, Y.
'Proceedings of the NIPS 2010 Workshop on Deep Learning' (2010) [pdf]
Machine learning
Mitchell, T. M.
McGraw-Hill, New York, NY (2010) [pdf]
Learning Concept Hierarchies from Text Corpora using Formal Concept Analysis
Cimiano, P.; Hotho, A. & Staab, S.
Journal of Artificial Intelligence Research, 24, 305-339 (2005) [pdf]
Random Forests
Breiman, L.
Machine Learning, 45(1) 5-32 (2001) [pdf]
Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the forest becomes large. The generalization error of a forest of tree classifiers depends on the strength of the individual trees in the forest and the correlation between them. Using a random selection of features to split each node yields error rates that compare favorably to Adaboost, but are more robust with respect to noise.
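A brief scikit-learn illustration of the two sources of randomness the abstract names, bootstrap resampling per tree and a random feature subset at each split; the dataset and hyperparameters are arbitrary.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # bootstrap=True resamples the training set per tree;
    # max_features="sqrt" draws a random feature subset at each split.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                                 bootstrap=True, random_state=0).fit(X_tr, y_tr)
    print(f"test accuracy: {clf.score(X_te, y_te):.3f}")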
Making Large-Scale SVM Learning Practical
Joachims, T.
Schölkopf, B.; Burges, C. J. & Smola, A., ed., 'Advances in Kernel Methods - Support Vector Learning', MIT Press, Cambridge, MA, USA (1999)