TY - CONF
AU - Nivarthi, Chandana Priya
AU - Sick, Bernhard
T1 - Towards Few-Shot Time Series Anomaly Detection with Temporal Attention and Dynamic Thresholding
T2 - International Conference on Machine Learning and Applications (ICMLA)
PB - IEEE
PY - 2023
SP - 1444
EP - 1450
M3 - 10.1109/ICMLA58977.2023.00218
KW - imported
KW - itegpub
KW - isac-www
KW - few-shot
KW - learning
KW - anomaly
KW - detection
KW - temporal
KW - attention
KW - dynamic
KW - thresholding
AB - Anomaly detection plays a pivotal role in diverse real-world applications such as cybersecurity, fault detection, network monitoring, predictive maintenance, and highly automated driving. However, obtaining labeled anomalous data can be a formidable challenge, especially when anomalies exhibit temporal evolution. This paper introduces LATAM (Long short-term memory Autoencoder with Temporal Attention Mechanism) for few-shot anomaly detection, with the aim of enhancing detection performance in scenarios with limited labeled anomaly data. LATAM effectively captures temporal dependencies and emphasizes significant patterns in multivariate time series data. In our investigation, we comprehensively evaluate LATAM against other anomaly detection models, particularly assessing its capability in few-shot learning scenarios where we have minimal examples from the normal class and none from the anomalous class in the training data. Our experimental results, derived from real-world photovoltaic inverter data, highlight LATAM’s superiority, showcasing a substantial 27% mean F1 score improvement, even when trained on a mere two-week dataset. Furthermore, LATAM demonstrates remarkable results on the open-source SWaT dataset, achieving a 12% boost in accuracy with only two days of training data. Moreover, we introduce a simple yet effective dynamic thresholding mechanism, further enhancing the anomaly detection capabilities of LATAM. This underscores LATAM’s efficacy in addressing the challenges posed by limited labeled anomalies in practical scenarios, and it proves valuable for downstream tasks involving temporal representation and time series prediction, extending its utility beyond anomaly detection applications.
ER -

TY - CONF
AU - Moallemy-Oureh, Alice
AU - Beddar-Wiesing, Silvia
AU - Nather, Rüdiger
AU - Thomas, Josephine
T1 - Marked Neural Spatio-Temporal Point Process Involving a Dynamic Graph Neural Network
T2 - Workshop on Temporal Graph Learning (TGL), NeurIPS
PY - 2023
SP - 1
EP - 7
UR - https://openreview.net/forum?id=QJx3Cmddsy
KW - imported
KW - itegpub
KW - isac-www
AB - Spatio-Temporal Point Processes (STPPs) have recently become increasingly interesting for learning dynamic graph data, since many scientific fields, ranging from mathematics, biology, social sciences, and physics to computer science, are naturally related and dynamic. While training Recurrent Neural Networks and solving PDEs for representing temporal data is expensive, TPPs are a good alternative. The drawback is that constructing an appropriate TPP for modeling temporal data requires the assumption of a particular temporal behavior of the data. To overcome this problem, Neural TPPs have been developed that enable learning the parameters of the TPP. However, the research on modeling dynamic graphs is relatively young, and only a few TPPs have been proposed to handle edge-dynamic graphs. To allow for learning on a fully dynamic graph, we propose the first Marked Neural Spatio-Temporal Point Process (MNSTPP) that leverages a Dynamic Graph Neural Network to learn Spatio-TPPs to model and predict any event in a graph stream. In addition, our model can be updated efficiently by considering single events for local retraining.
ER -

TY - CONF
AU - Lachi, Veronica
AU - Moallemy-Oureh, Alice
AU - Roth, Andreas
AU - Welke, Pascal
T1 - Graph Pooling Provably Improves Expressivity
T2 - Workshop on New Frontiers in Graph Learning, NeurIPS
PY - 2023
SP - 1
EP - 7
UR - https://openreview.net/forum?id=lR5NYB9zrv
KW - imported
KW - itegpub
KW - isac-www
AB - In the domain of graph neural networks (GNNs), pooling operators are fundamental to reduce the size of the graph by simplifying graph structures and vertex features. Recent advances have shown that well-designed pooling operators, coupled with message-passing layers, can endow hierarchical GNNs with an expressive power regarding the graph isomorphism test that is equal to the Weisfeiler-Leman (WL) test. However, the ability of hierarchical GNNs to increase expressive power by utilizing graph coarsening has not yet been explored. This results in uncertainties about the benefits of pooling operators and a lack of sufficient properties to guide their design. In this work, we identify conditions for pooling operators to generate WL-distinguishable coarsened graphs from originally WL-indistinguishable but non-isomorphic graphs. Our conditions are versatile and can be tailored to specific tasks and data characteristics, offering a promising avenue for further research.
ER -

TY - CONF
AU - Decke, Jens
AU - Gruhl, Christian
AU - Rauch, Lukas
AU - Sick, Bernhard
T1 - DADO – Low-Cost Query Strategies for Deep Active Design Optimization
T2 - International Conference on Machine Learning and Applications (ICMLA)
PB - IEEE
PY - 2023
SP - 1611
EP - 1618
M3 - 10.1109/ICMLA58977.2023.00244
KW - imported
KW - itegpub
KW - isac-www
KW - Self-Optimization
KW - Self-Supervised-Learning
KW - Design-Optimization
KW - Active-Learning
KW - Numerical-Simulation
AB - In this work, we apply deep active learning to the field of design optimization to reduce the number of computationally expensive numerical simulations widely used in industry and engineering. We are interested in optimizing the design of structural components, where a set of parameters describes the shape. If we can predict the performance based on these parameters and consider only the promising candidates for simulation, there is enormous potential for saving computing power. We present two query strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems. Our proposed methodology provides an intuitive approach that is easy to apply, offers significant improvements over random sampling, and circumvents the need for uncertainty estimation. We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance. Findings from our evaluation highlight the effectiveness of our query strategies in accelerating design optimization. Furthermore, the introduced method is easily transferable to other self-optimization problems in industry and engineering.
ER -

TY - CONF
AU - Heidecker, Florian
AU - Susetzky, Tobias
AU - Fuchs, Erich
AU - Sick, Bernhard
T1 - Context Information for Corner Case Detection in Highly Automated Driving
T2 - IEEE International Conference on Intelligent Transportation Systems (ITSC)
PB - IEEE
PY - 2023
SP - 1522
EP - 1529
M3 - 10.1109/ITSC57777.2023.10422414
KW - imported
KW - itegpub
KW - isac-www
AB - Context information provided along with a dataset can be very helpful for solving a problem because the additional knowledge is already available and does not need to be extracted. Moreover, the context indicates how diverse a dataset is, i.e., how many samples per context category are available to train and test machine learning (ML) models. In this article, we present context annotations for the BDD100k image dataset. The annotations comprise, for instance, information about daytime, road condition (dry/wet), and dirt on the windshield. Sometimes, no or only little data are available for unique or rare combinations of these context attributes. However, data that match these context conditions are crucial when discussing corner cases: Firstly, most ML models, e.g., object detectors, are not trained on such data, which leads to the assumption that they will perform poorly in those situations. Secondly, data containing corner cases are required for validating ML models. With this in mind, separate ML models dedicated to context detection are useful for expanding the training set with additional data of special interest, such as corner cases.
ER -