%0 Journal Article
%A Heidecker, Florian
%A El-Khateeb, Ahmad
%A Bieshaar, Maarten
%A Sick, Bernhard
%D 2024
%T Criteria for Uncertainty-based Corner Cases Detection in Instance Segmentation
%B arXiv e-prints
%P arXiv:2404.11266
%3 article
%F heidecker2024criteria
%K imported, itegpub, isac-www
%X The operating environment of a highly automated vehicle is subject to change, e.g., in weather, illumination, or the scenario with its different objects and participants through which the vehicle has to navigate its passengers safely. These situations must be considered when developing and validating highly automated driving functions. This already poses a problem for training and evaluating deep learning models: without the costly labeling of thousands of recordings, it is unknown whether the data contains relevant, interesting samples for further model training, and it remains a guess under which conditions and situations the model performs poorly. For this purpose, we present corner case criteria based on predictive uncertainty. With these criteria, we are able to detect uncertainty-based corner cases of an object instance segmentation model without relying on ground truth (GT) data. We evaluate each corner case criterion on the COCO and NuImages datasets to analyze the potential of our approach. We also provide a corner case decision function that allows us to classify each object as a True Positive (TP), a localization and/or classification corner case, or a False Positive (FP). Finally, we present first results of an iterative training cycle that outperforms the baseline and in which the data added to the training dataset is selected based on the corner case decision function.
%U https://arxiv.org/abs/2404.11266

%0 Conference Proceedings
%A Huang, Zhixin
%A Nivarthi, Chandana Priya
%A Gruhl, Christian
%A Sick, Bernhard
%D 2024
%T Spatial-Temporal Attention Graph Neural Network with Uncertainty Estimation for Remaining Useful Life Prediction
%B International Joint Conference on Neural Networks (IJCNN)
%I IEEE
%3 inproceedings
%F huang2024spatial
%K imported, itegpub, isac-www, Graph-Neural-Network, Remaining-Useful-Life-Prediction, Uncertainty-Estimation, Spatio-Temporal-Attention
%X In the increasingly complex domain of industrial system health management, accurate prediction of remaining useful life plays an essential role. This paper analyzes methods to improve the predictive performance of remaining useful life estimation from three aspects: optimizing model structures, augmenting uncertainty estimation in predictions, and transitioning normalization methods. Based on our analysis, we propose a novel model, the Uncertainty Spatial-Temporal Attention Graph Neural Network (USTAGNN), which consists of three primary components: sensor graph construction, a spatio-temporal feature extractor, and a probabilistic prediction module. The feature extractor leverages graph neural networks and temporal convolutional networks as a foundation to extract spatial and temporal features, further enhanced by attention mechanisms, spectral normalization, and residual connections to bolster its distance awareness. Following extensive experimental comparisons, we use a parameter-driven dynamic adjacency matrix for sensor graph construction and a deep kernel Gaussian process for precise uncertainty estimation. USTAGNN addresses issues not thoroughly covered in existing research, such as comparative analyses of sensor graph construction methods, accurate uncertainty estimation, and the model's generalization under different preprocessing conditions.
The proposed model demonstrates state-of-the-art performance on various subsets of the C-MAPSS dataset, achieving up to a 35.9% improvement in prediction score.
%Z (accepted)

%0 Conference Proceedings
%A Nivarthi, Chandana Priya
%A Huang, Zhixin
%A Gruhl, Christian
%A Sick, Bernhard
%D 2024
%T Multi-Task Representation Learning with Temporal Attention for Zero-Shot Time Series Anomaly Detection
%B International Joint Conference on Neural Networks (IJCNN)
%I IEEE
%3 inproceedings
%F nivarthi2024multi
%K imported, itegpub, isac-www
%X Ensuring the reliability of critical industrial systems across various sectors is crucial. It is essential to detect deviations from regular behaviour to mitigate disruptions and preserve infrastructure integrity. However, accurately labelling anomaly datasets is challenging due to the rarity of anomalies and the subjectivity of manual annotation. The conventional approach of training a separate model for each dataset entity further complicates model development. This paper presents a novel multi-task learning framework combining an LSTM Autoencoder with a temporal attention mechanism (MTL-LATAM) for effective time series anomaly detection. Multi-task learning models improve adaptability and generalizability, reducing runtime and compute requirements while supporting zero-shot evaluation. These models offer flexibility in detecting emerging anomalies. Additionally, we introduce a dynamic thresholding mechanism that incorporates temporal context for anomaly detection, and we provide visualizations of attention weights to enhance interpretability. The study compares MTL-LATAM with other multi-task models, evaluates multi-task versus single-task models, and assesses the performance of the proposed framework in zero-shot learning scenarios. The findings indicate MTL-LATAM's effectiveness across real-world and open-source datasets, achieving 95% and 97% task synergy.
The results underscore the superior performance of multi-task models in zero-shot tasks compared to individual models trained exclusively on their respective datasets.
%Z (accepted)

%0 Book
%D 2024
%T Organic Computing -- Doctoral Dissertation Colloquium 2023
%E Tomforde, Sven
%E Krupitzer, Christian
%B Intelligent Embedded Systems
%I kassel university press
%V 26
%3 book
%F tomforde2024organic
%K imported, itegpub, isac-www

%0 Journal Article
%A Pham, Tuan
%A Kottke, Daniel
%A Krempl, Georg
%A Sick, Bernhard
%D 2022
%T Stream-based active learning for sliding windows under the influence of verification latency
%B Machine Learning
%I Springer
%V 111
%N 6
%P 2011--2036
%3 article
%F pham2022stream
%K imported, itegpub, isac-www
%X Stream-based active learning (AL) strategies minimize the labeling effort by querying the labels that improve the classifier's performance the most. So far, these strategies neglect the fact that an oracle or expert requires time to provide a queried label. We show that existing AL methods deteriorate or even fail under the influence of such verification latency. The problem with these methods is that they estimate a label's utility on the currently available labeled data. However, by the time this label arrives, some of the current data may have become outdated and new labels may have arrived. In this article, we propose to simulate the data that will be available at the time the label arrives. To this end, our method Forgetting and Simulating (FS) forgets outdated information and simulates the delayed labels to obtain more realistic utility estimates. We assume that the label's arrival date is known a priori and that the classifier's training data is bounded by a sliding window.
Our extensive experiments show that FS improves stream-based AL strategies in settings with both constant and variable verification latency.