
Author | Title | Year | Journal/Proceedings | Reftype | DOI/URL
Heidecker, F., El-Khateeb, A., Bieshaar, M. & Sick, B. Criteria for Uncertainty-based Corner Cases Detection in Instance Segmentation 2024 arXiv e-prints   article URL  
Abstract: The operating environment of a highly automated vehicle is subject to change, e.g., weather, illumination, or the scenario containing different objects and other participants in which the highly automated vehicle has to navigate its passengers safely. These situations must be considered when developing and validating highly automated driving functions. This already poses a problem for training and evaluating deep learning models because without the costly labeling of thousands of recordings, not knowing whether the data contains relevant, interesting data for further model training, it is a guess under which conditions and situations the model performs poorly. For this purpose, we present corner case criteria based on the predictive uncertainty. With our corner case criteria, we are able to detect uncertainty-based corner cases of an object instance segmentation model without relying on ground truth (GT) data. We evaluated each corner case criterion using the COCO and the NuImages dataset to analyze the potential of our approach. We also provide a corner case decision function that allows us to distinguish each object into True Positive (TP), localization and/or classification corner case, or False Positive (FP). We also present our first results of an iterative training cycle that outperforms the baseline and where the data added to the training dataset is selected based on the corner case decision function.
BibTeX:
@article{heidecker2024criteria,
  author = {Heidecker, Florian and El-Khateeb, Ahmad and Bieshaar, Maarten and Sick, Bernhard},
  title = {Criteria for Uncertainty-based Corner Cases Detection in Instance Segmentation},
  journal = {arXiv e-prints},
  year = {2024},
  pages = {arXiv:2404.11266},
  url = {https://arxiv.org/abs/2404.11266}
}
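The corner case decision function mentioned in the abstract is not spelled out in this export. A minimal sketch of an uncertainty-based routing rule of this general kind (function names, thresholds, and the entropy criterion are illustrative assumptions, not the paper's actual criteria) might look like:

```python
import math

def entropy(probs):
    """Shannon entropy of a class-probability vector (natural log)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def decide(class_probs, loc_uncertainty,
           cls_threshold=0.5, loc_threshold=0.3, fp_threshold=1.0):
    """Route a detected instance to TP / corner case / FP by uncertainty.

    class_probs     -- predicted class distribution for the instance
    loc_uncertainty -- scalar localization (mask/box) uncertainty
    All thresholds are illustrative placeholders.
    """
    cls_u = entropy(class_probs)
    if cls_u >= fp_threshold:
        return "FP"  # too uncertain overall to trust the detection
    tags = []
    if cls_u >= cls_threshold:
        tags.append("classification corner case")
    if loc_uncertainty >= loc_threshold:
        tags.append("localization corner case")
    return " + ".join(tags) if tags else "TP"
```

Note that no ground truth appears anywhere in the rule, which matches the abstract's point that the criteria operate on predictive uncertainty alone.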
Huang, Z., Nivarthi, C. P., Gruhl, C. & Sick, B. Spatial-Temporal Attention Graph Neural Network with Uncertainty Estimation for Remaining Useful Life Prediction 2024 International Joint Conference on Neural Networks (IJCNN)   inproceedings  
Abstract: In the increasingly complex industrial system health management domain, accurate prediction of remaining useful life plays an essential role. This paper analyzes the methods to improve the predictive performance of remaining useful life from three aspects: optimizing model structures, augmenting uncertainty estimation in predictions, and transitioning normalization methods. Based on our analysis, we propose a novel model, the Uncertainty Spatial-Temporal Attention Graph Neural Network (USTAGNN), which consists of three primary components: sensor graph construction, a spatio-temporal feature extractor, and a probabilistic prediction module. The feature extractor leverages graph neural networks and temporal convolutional networks as a foundation to extract spatial and temporal features, further enhanced by attention mechanisms, spectral normalization, and residual connections to bolster its distance awareness. Following extensive experimental comparisons, we utilized the parameter-driven dynamic adjacency matrix for sensor graph construction and the deep kernel Gaussian process for precise uncertainty estimation. USTAGNN tries to resolve issues not thoroughly addressed in existing research, such as comparative analyses of sensor graph construction methods, accurate uncertainty estimation, and the model’s generalization under different preprocessing conditions. The proposed model demonstrated state-of-the-art performance on various subsets of the C-MAPSS dataset, achieving up to a 35.9% improvement in prediction score.
BibTeX:
@inproceedings{huang2024spatial,
  author = {Huang, Zhixin and Nivarthi, Chandana Priya and Gruhl, Christian and Sick, Bernhard},
  title = {Spatial-Temporal Attention Graph Neural Network with Uncertainty Estimation for Remaining Useful Life Prediction},
  booktitle = {International Joint Conference on Neural Networks (IJCNN)},
  publisher = {IEEE},
  year = {2024},
  note = {(accepted)}
}
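The abstract refers to a parameter-driven dynamic adjacency matrix for sensor graph construction. A common construction of this kind (in the style of Graph WaveNet-like models; the paper's exact formulation is not given here, and the embedding sizes below are arbitrary stand-ins) learns two node-embedding matrices and derives the adjacency from their product:

```python
import numpy as np

rng = np.random.default_rng(0)
num_sensors, emb_dim = 5, 8

# Learnable source/target node embeddings (random stand-ins here;
# in a real model these would be trained parameters).
E1 = rng.standard_normal((num_sensors, emb_dim))
E2 = rng.standard_normal((num_sensors, emb_dim))

def dynamic_adjacency(E1, E2):
    """A = softmax(relu(E1 @ E2.T)), row-wise: a learned,
    row-stochastic adjacency instead of a fixed sensor graph."""
    scores = np.maximum(E1 @ E2.T, 0.0)           # ReLU
    ex = np.exp(scores - scores.max(axis=1, keepdims=True))
    return ex / ex.sum(axis=1, keepdims=True)     # row-wise softmax

A = dynamic_adjacency(E1, E2)
assert A.shape == (num_sensors, num_sensors)
assert np.allclose(A.sum(axis=1), 1.0)            # each row sums to 1
```

Because the adjacency is a function of trainable embeddings, the sensor graph is learned jointly with the rest of the network rather than hand-specified.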
Nivarthi, C. P., Huang, Z., Gruhl, C. & Sick, B. Multi-Task Representation Learning with Temporal Attention for Zero-Shot Time Series Anomaly Detection 2024 International Joint Conference on Neural Networks (IJCNN)   inproceedings  
Abstract: Ensuring the reliability of critical industrial systems across various sectors is crucial. It is essential to detect deviations from regular behaviour to mitigate disruptions and preserve infrastructure integrity. However, accurately labelling anomaly datasets is challenging due to their rarity and manual annotation subjectivity. The conventional approach of training separate models for each dataset entity further complicates model development. This paper presents a novel Multi-task Learning framework combining LSTM Autoencoder with temporal attention mechanism (MTL-LATAM) for effective time series anomaly detection. Multi-task learning models improve adaptability and generalizability, leading to reduced runtime and compute power while supporting zero-shot evaluation. These models offer flexibility in detecting emerging anomalies. Additionally, we introduce a dynamic thresholding mechanism to incorporate temporal context for anomaly detection and provide visualizations of attention weights to enhance interpretability. The study compares MTL-LATAM with other multi-task models, evaluates multi-task versus single-task models, and assesses the performance of the proposed framework in zero-shot learning scenarios. The findings indicate MTL-LATAM's effectiveness across real-world and open-source datasets, achieving ...% and 97% task synergy. The results underscore the superior performance of multi-task models in zero-shot tasks compared to individual models trained exclusively on their respective datasets.
BibTeX:
@inproceedings{nivarthi2024multi,
  author = {Nivarthi, Chandana Priya and Huang, Zhixin and Gruhl, Christian and Sick, Bernhard},
  title = {Multi-Task Representation Learning with Temporal Attention for Zero-Shot Time Series Anomaly Detection},
  booktitle = {International Joint Conference on Neural Networks (IJCNN)},
  publisher = {IEEE},
  year = {2024},
  note = {(accepted)}
}
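The dynamic thresholding mechanism is described only at a high level in the abstract. One standard way to incorporate temporal context, shown here purely as an illustration (a rolling mean + k·std rule over the reconstruction error, which may differ from the paper's mechanism), is:

```python
import statistics
from collections import deque

def dynamic_threshold_flags(errors, window=5, k=3.0):
    """Flag points whose reconstruction error exceeds a rolling
    mean + k * std threshold over the last `window` errors.
    `window` and `k` are illustrative hyperparameters."""
    history = deque(maxlen=window)
    flags = []
    for e in errors:
        if len(history) >= 2:
            mu = statistics.mean(history)
            sigma = statistics.pstdev(history)
            flags.append(e > mu + k * sigma)
        else:
            flags.append(False)   # not enough temporal context yet
        history.append(e)
    return flags
```

Because the threshold tracks the recent error distribution, a fixed global cutoff is not needed and the detector adapts as the series drifts.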
Organic Computing -- Doctoral Dissertation Colloquium 2023 2024   book DOI  
BibTeX:
@book{tomforde2024organic,
  title = {Organic Computing -- Doctoral Dissertation Colloquium 2023},
  publisher = {kassel university press},
  year = {2024},
  volume = {26},
  doi = {10.17170/kobra-202402269661}
}
Pham, T., Kottke, D., Krempl, G. & Sick, B. Stream-based active learning for sliding windows under the influence of verification latency 2022 Machine Learning   article DOI  
Abstract: Stream-based active learning (AL) strategies minimize the labeling effort by querying labels that improve the classifier's performance the most. So far, these strategies neglect the fact that an oracle or expert requires time to provide a queried label. We show that existing AL methods deteriorate or even fail under the influence of such verification latency. The problem with these methods is that they estimate a label's utility on the currently available labeled data. However, when this label would arrive, some of the current data may have gotten outdated and new labels have arrived. In this article, we propose to simulate the available data at the time when the label would arrive. Therefore, our method Forgetting and Simulating (FS) forgets outdated information and simulates the delayed labels to get more realistic utility estimates. We assume to know the label's arrival date a priori and the classifier's training data to be bounded by a sliding window. Our extensive experiments show that FS improves stream-based AL strategies in settings with both, constant and variable verification latency.
BibTeX:
@article{pham2022stream,
  author = {Pham, Tuan and Kottke, Daniel and Krempl, Georg and Sick, Bernhard},
  title = {Stream-based active learning for sliding windows under the influence of verification latency},
  journal = {Machine Learning},
  publisher = {Springer},
  year = {2022},
  volume = {111},
  number = {6},
  pages = {2011--2036},
  doi = {10.1007/s10994-021-06099-z}
}
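The Forgetting and Simulating (FS) idea from the abstract — estimate a label's utility on the data that will be available when the label arrives, not on the data available now — can be illustrated with a small window simulation (variable names and the tuple layout are mine, not the paper's):

```python
def simulate_training_set(labeled, t_now, latency, window):
    """Simulate the labeled set available when a label queried at
    t_now arrives at t_now + latency, for a classifier whose training
    data is bounded by a sliding window of size `window`.

    labeled -- list of (timestamp, arrival_time, x, y) tuples
    Forgetting: drop samples that fall out of the window by arrival.
    Simulating: keep only labels that will have arrived by then.
    """
    t_arrival = t_now + latency
    return [
        (ts, arr, x, y)
        for (ts, arr, x, y) in labeled
        if ts > t_arrival - window   # forget outdated samples
        and arr <= t_arrival         # only labels arrived by t_arrival
    ]
```

An active learning strategy would then score a candidate query against this simulated set, matching the abstract's assumption that the label's arrival date is known a priori.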

Created on 02/05/2024 by JabRef export filters via the social publication management platform PUMA.