%0 Conference Proceedings
%A Nivarthi, Chandana Priya
%A Sick, Bernhard
%D 2023
%T Towards Few-Shot Time Series Anomaly Detection with Temporal Attention and Dynamic Thresholding
%B International Conference on Machine Learning and Applications (ICMLA)
%I IEEE
%P 1444--1450
%3 inproceedings
%F nivarthi2023towards
%K few-shot, learning, anomaly, detection, temporal, attention, dynamic, thresholding
%X Anomaly detection plays a pivotal role in diverse real-world applications such as cybersecurity, fault detection, network monitoring, predictive maintenance, and highly automated driving. However, obtaining labeled anomalous data can be a formidable challenge, especially when anomalies exhibit temporal evolution. This paper introduces LATAM (Long short-term memory Autoencoder with Temporal Attention Mechanism) for few-shot anomaly detection, with the aim of enhancing detection performance in scenarios with limited labeled anomaly data. LATAM effectively captures temporal dependencies and emphasizes significant patterns in multivariate time series data. In our investigation, we comprehensively evaluate LATAM against other anomaly detection models, particularly assessing its capability in few-shot learning scenarios where we have minimal examples from the normal class and none from the anomalous class in the training data. Our experimental results, derived from real-world photovoltaic inverter data, highlight LATAM’s superiority, showcasing a substantial 27% mean F1 score improvement, even when trained on a mere two-week dataset. Furthermore, LATAM demonstrates remarkable results on the open-source SWaT dataset, achieving a 12% boost in accuracy with only two days of training data. Moreover, we introduce a simple yet effective dynamic thresholding mechanism, further enhancing the anomaly detection capabilities of LATAM.
This underscores LATAM’s efficacy in addressing the challenges posed by limited labeled anomalies in practical scenarios, and it proves valuable for downstream tasks involving temporal representation and time series prediction, extending its utility beyond anomaly detection applications.
%0 Conference Proceedings
%A Moallemy-Oureh, Alice
%A Beddar-Wiesing, Silvia
%A Nather, Rüdiger
%A Thomas, Josephine
%D 2023
%T Marked Neural Spatio-Temporal Point Process Involving a Dynamic Graph Neural Network
%B Workshop on Temporal Graph Learning (TGL), NeurIPS
%P 1--7
%3 inproceedings
%F moallemyoureh2023marked
%X Spatio-Temporal Point Processes (STPPs) have recently become increasingly interesting for learning dynamic graph data, since data in many scientific fields, ranging from mathematics, biology, social sciences, and physics to computer science, are naturally relational and dynamic. While training Recurrent Neural Networks and solving PDEs for representing temporal data is expensive, Temporal Point Processes (TPPs) are a good alternative. The drawback is that constructing an appropriate TPP for modeling temporal data requires the assumption of a particular temporal behavior of the data. To overcome this problem, Neural TPPs have been developed that enable learning of the parameters of the TPP. However, this line of research is relatively young for modeling dynamic graphs, and only a few TPPs have been proposed to handle edge-dynamic graphs. To allow for learning on a fully dynamic graph, we propose the first Marked Neural Spatio-Temporal Point Process (MNSTPP) that leverages a Dynamic Graph Neural Network to learn Spatio-TPPs to model and predict any event in a graph stream. In addition, our model can be updated efficiently by considering single events for local retraining.
%U https://openreview.net/forum?id=QJx3Cmddsy
%0 Conference Proceedings
%A Lachi, Veronica
%A Moallemy-Oureh, Alice
%A Roth, Andreas
%A Welke, Pascal
%D 2023
%T Graph Pooling Provably Improves Expressivity
%B Workshop on New Frontiers in Graph Learning, NeurIPS
%P 1--7
%3 inproceedings
%F lachi2023graph
%X In the domain of graph neural networks (GNNs), pooling operators are fundamental to reduce the size of the graph by simplifying graph structures and vertex features. Recent advances have shown that well-designed pooling operators, coupled with message-passing layers, can endow hierarchical GNNs with an expressive power regarding the graph isomorphism test that is equal to the Weisfeiler-Leman test. However, the ability of hierarchical GNNs to increase expressive power by utilizing graph coarsening has not yet been explored. This results in uncertainties about the benefits of pooling operators and a lack of sufficient properties to guide their design. In this work, we identify conditions for pooling operators to generate WL-distinguishable coarsened graphs from originally WL-indistinguishable but non-isomorphic graphs. Our conditions are versatile and can be tailored to specific tasks and data characteristics, offering a promising avenue for further research.
%U https://openreview.net/forum?id=lR5NYB9zrv
%0 Conference Proceedings
%A Decke, Jens
%A Gruhl, Christian
%A Rauch, Lukas
%A Sick, Bernhard
%D 2023
%T DADO – Low-Cost Query Strategies for Deep Active Design Optimization
%B International Conference on Machine Learning and Applications (ICMLA)
%I IEEE
%P 1611--1618
%3 inproceedings
%F decke2023dado
%K Self-Optimization, Self-Supervised-Learning, Design-Optimization, Active-Learning, Numerical-Simulation
%X In this work, we apply deep active learning to the field of design optimization to reduce the number of computationally expensive numerical simulations widely used in industry and engineering. We are interested in optimizing the design of structural components, where a set of parameters describes the shape. If we can predict the performance based on these parameters and consider only the promising candidates for simulation, there is an enormous potential for saving computing power. We present two query strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems. Our proposed methodology provides an intuitive approach that is easy to apply, offers significant improvements over random sampling, and circumvents the need for uncertainty estimation. We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance. Findings from our evaluation highlight the effectiveness of our query strategies in accelerating design optimization. Furthermore, the introduced method is easily transferable to other self-optimization problems in industry and engineering.
%0 Conference Proceedings
%A Heidecker, Florian
%A Susetzky, Tobias
%A Fuchs, Erich
%A Sick, Bernhard
%D 2023
%T Context Information for Corner Case Detection in Highly Automated Driving
%B IEEE International Conference on Intelligent Transportation Systems (ITSC)
%I IEEE
%P 1522--1529
%3 inproceedings
%F heidecker2023context
%X Context information provided along with a dataset can be very helpful for solving a problem because the additional knowledge is already available and does not need to be extracted. Moreover, the context indicates how diverse a dataset is, i.e., how many samples per context category are available to train and test machine learning (ML) models. In this article, we present context annotations for the BDD100k image dataset. The annotations comprise, for instance, information about daytime, road condition (dry/wet), and dirt on the windshield. Sometimes, no or only little data are available for unique or rare combinations of these context attributes. However, data that matches these context conditions is crucial when discussing corner cases: Firstly, most ML models, e.g., object detectors, are not trained on such data, which leads to the assumption that they will perform poorly in those situations. Secondly, data containing corner cases are required for validating ML models. With this in mind, separate ML models dedicated to context detection are useful for expanding the training set with additional data of special interest, such as corner cases.