

    Author | Title | Year | Journal/Proceedings | Reftype | DOI/URL
    Decke, J., Gruhl, C., Rauch, L. & Sick, B. DADO – Low-Cost Query Strategies for Deep Active Design Optimization 2023 International Conference on Machine Learning and Applications (ICMLA), pp. 1611-1618  inproceedings DOI  
    Abstract: In this work, we apply deep active learning to the field of design optimization to reduce the number of computationally expensive numerical simulations widely used in industry and engineering. We are interested in optimizing the design of structural components, where a set of parameters describes the shape. If we can predict the performance based on these parameters and consider only the promising candidates for simulation, there is an enormous potential for saving computing power. We present two query strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems. Our proposed methodology provides an intuitive approach that is easy to apply, offers significant improvements over random sampling, and circumvents the need for uncertainty estimation. We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance. Findings from our evaluation highlight the effectiveness of our query strategies in accelerating design optimization. Furthermore, the introduced method is easily transferable to other self-optimization problems in industry and engineering.
    BibTeX:
    @inproceedings{decke2023dado,
      author = {Decke, Jens and Gruhl, Christian and Rauch, Lukas and Sick, Bernhard},
      title = {DADO – Low-Cost Query Strategies for Deep Active Design Optimization},
      booktitle = {International Conference on Machine Learning and Applications (ICMLA)},
      publisher = {IEEE},
      year = {2023},
      pages = {1611--1618},
      doi = {http://dx.doi.org/10.1109/ICMLA58977.2023.00244}
    }
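    The abstract's core idea (ranking candidate designs by a surrogate's predicted performance and simulating only the most promising ones, with no uncertainty estimation) might be sketched as follows. The linear surrogate, the toy objective, and all parameter names are illustrative assumptions, not the authors' implementation.

    ```python
    import numpy as np

    # Hypothetical sketch of a low-cost query strategy for active design
    # optimization: rank unlabeled designs by a surrogate's predicted
    # performance and query only the top candidates for expensive simulation.

    rng = np.random.default_rng(0)

    def simulate(x):
        """Stand-in for an expensive numerical simulation (toy objective)."""
        return -np.sum((x - 0.5) ** 2, axis=-1)

    # Small initial labeled set and a large unlabeled pool of shape parameters.
    X_labeled = rng.random((20, 4))
    y_labeled = simulate(X_labeled)
    X_pool = rng.random((1000, 4))

    def fit_linear_surrogate(X, y):
        """Least-squares surrogate mapping shape parameters to performance."""
        A = np.hstack([X, np.ones((len(X), 1))])
        w, *_ = np.linalg.lstsq(A, y, rcond=None)
        return w

    def query_top_k(w, X, k):
        """Select the k candidates with the highest predicted performance."""
        preds = np.hstack([X, np.ones((len(X), 1))]) @ w
        return np.argsort(preds)[-k:]

    w = fit_linear_surrogate(X_labeled, y_labeled)
    idx = query_top_k(w, X_pool, k=10)
    # Only these 10 candidates would be passed to the expensive simulator.
    ```

    Compared with random sampling, such a strategy concentrates the simulation budget on designs the surrogate already considers promising, which is the saving the abstract describes.
    
    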
    
    Heidecker, F., Susetzky, T., Fuchs, E. & Sick, B. Context Information for Corner Case Detection in Highly Automated Driving 2023 IEEE International Conference on Intelligent Transportation Systems (ITSC), pp. 1522-1529  inproceedings DOI  
    Abstract: Context information provided along with a dataset can be very helpful for solving a problem because the additional knowledge is already available and does not need to be extracted. Moreover, the context indicates how diverse a dataset is, i.e., how many samples per context category are available to train and test machine learning (ML) models. In this article, we present context annotations for the BDD100k image dataset. The annotations comprise, for instance, information about daytime, road condition (dry/wet), and dirt on the windshield. Sometimes, no or only little data are available for unique or rare combinations of these context attributes. However, data that matches these context conditions is crucial when discussing corner cases: Firstly, most ML models, e.g., object detectors, are not trained on such data, which leads to the assumption that they will perform poorly in those situations. Secondly, data containing corner cases are required for validating ML models. With this in mind, separate ML models dedicated to context detection are useful for expanding the training set with additional data of special interest, such as corner cases.
    BibTeX:
    @inproceedings{heidecker2023context,
      author = {Heidecker, Florian and Susetzky, Tobias and Fuchs, Erich and Sick, Bernhard},
      title = {Context Information for Corner Case Detection in Highly Automated Driving},
      booktitle = {IEEE International Conference on Intelligent Transportation Systems (ITSC)},
      publisher = {IEEE},
      year = {2023},
      pages = {1522--1529},
      doi = {http://dx.doi.org/10.1109/ITSC57777.2023.10422414}
    }
    
    Lachi, V., Moallemy-Oureh, A., Roth, A. & Welke, P. Graph Pooling Provably Improves Expressivity 2023 Workshop on New Frontiers in Graph Learning, NeurIPS, pp. 1-7  inproceedings URL 
    Abstract: In the domain of graph neural networks (GNNs), pooling operators are fundamental to reduce the size of the graph by simplifying graph structures and vertex features. Recent advances have shown that well-designed pooling operators, coupled with message-passing layers, can endow hierarchical GNNs with an expressive power regarding the graph isomorphism test that is equal to the Weisfeiler-Leman test. However, the ability of hierarchical GNNs to increase expressive power by utilizing graph coarsening was not yet explored. This results in uncertainties about the benefits of pooling operators and a lack of sufficient properties to guide their design. In this work, we identify conditions for pooling operators to generate WL-distinguishable coarsened graphs from originally WL-indistinguishable but non-isomorphic graphs. Our conditions are versatile and can be tailored to specific tasks and data characteristics, offering a promising avenue for further research.
    BibTeX:
    @inproceedings{lachi2023graph,
      author = {Lachi, Veronica and Moallemy-Oureh, Alice and Roth, Andreas and Welke, Pascal},
      title = {Graph Pooling Provably Improves Expressivity},
      booktitle = {Workshop on New Frontiers in Graph Learning, NeurIPS},
      year = {2023},
      pages = {1--7},
      url = {https://openreview.net/forum?id=lR5NYB9zrv}
    }
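    The phenomenon the paper studies can be illustrated on a classic toy pair: 1-WL cannot distinguish a 6-cycle from two disjoint triangles (both are 2-regular), yet a coarsening that pools each connected component into one node yields graphs that differ already in their vertex count. The component-based pooling below is an illustrative choice for this sketch, not the paper's operator or conditions.

    ```python
    from collections import Counter

    # Toy illustration: 1-WL color refinement assigns identical color
    # histograms to a 6-cycle and to two disjoint triangles, but pooling
    # connected components produces distinguishable coarsened graphs.

    def wl_histogram(adj, rounds=3):
        """Return the 1-WL color histogram of a graph (adjacency dict)."""
        colors = {v: 0 for v in adj}
        for _ in range(rounds):
            colors = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
                      for v in adj}
        return Counter(colors.values())

    def components(adj):
        """Connected components via depth-first search."""
        seen, comps = set(), []
        for s in adj:
            if s in seen:
                continue
            stack, comp = [s], set()
            while stack:
                v = stack.pop()
                if v in comp:
                    continue
                comp.add(v)
                stack.extend(adj[v])
            seen |= comp
            comps.append(comp)
        return comps

    cycle6 = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
    two_triangles = {0: [1, 2], 1: [0, 2], 2: [0, 1],
                     3: [4, 5], 4: [3, 5], 5: [3, 4]}

    # 1-WL histograms coincide on the original graphs...
    same_before = wl_histogram(cycle6) == wl_histogram(two_triangles)
    # ...but the pooled graphs differ already in their number of vertices.
    diff_after = len(components(cycle6)) != len(components(two_triangles))
    ```
    
    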
    
    Moallemy-Oureh, A., Beddar-Wiesing, S., Nather, R. & Thomas, J. Marked Neural Spatio-Temporal Point Process Involving a Dynamic Graph Neural Network 2023 Workshop on Temporal Graph Learning (TGL), NeurIPS, pp. 1-7  inproceedings URL 
    Abstract: Spatio-Temporal Point Processes (STPPs) have recently become increasingly interesting for learning dynamic graph data since many scientific fields, ranging from mathematics, biology, social sciences, and physics to computer science, are naturally related and dynamic. While training Recurrent Neural Networks and solving PDEs for representing temporal data is expensive, TPPs were a good alternative. The drawback is that constructing an appropriate TPP for modeling temporal data requires the assumption of a particular temporal behavior of the data. To overcome this problem, Neural TPPs have been developed that enable learning of the parameters of the TPP. However, the research is relatively young for modeling dynamic graphs, and only a few TPPs have been proposed to handle edge-dynamic graphs. To allow for learning on a fully dynamic graph, we propose the first Marked Neural Spatio-Temporal Point Process (MNSTPP) that leverages a Dynamic Graph Neural Network to learn Spatio-TPPs to model and predict any event in a graph stream. In addition, our model can be updated efficiently by considering single events for local retraining.
    BibTeX:
    @inproceedings{moallemyoureh2023marked,
      author = {Moallemy-Oureh, Alice and Beddar-Wiesing, Silvia and Nather, Rüdiger and Thomas, Josephine},
      title = {Marked Neural Spatio-Temporal Point Process Involving a Dynamic Graph Neural Network},
      booktitle = {Workshop on Temporal Graph Learning (TGL), NeurIPS},
      year = {2023},
      pages = {1--7},
      url = {https://openreview.net/forum?id=QJx3Cmddsy}
    }
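    For readers unfamiliar with temporal point processes, a minimal Hawkes-style conditional intensity shows the kind of event model that neural and spatio-temporal variants like MNSTPP generalize with marks, spatial information, and a dynamic GNN. The parameter values here are illustrative assumptions.

    ```python
    import numpy as np

    # Minimal self-exciting temporal point process: the conditional intensity
    #   lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    # rises after each event and decays back toward the base rate mu.

    def hawkes_intensity(t, event_times, mu=0.2, alpha=0.8, beta=1.0):
        """Evaluate the Hawkes conditional intensity at time t."""
        past = np.asarray([ti for ti in event_times if ti < t])
        return mu + np.sum(alpha * np.exp(-beta * (t - past)))

    events = [1.0, 1.5, 3.0]
    lam = hawkes_intensity(4.0, events)  # elevated shortly after the last event
    ```

    A neural TPP replaces this fixed parametric form with a learned intensity, which is what removes the abstract's "assumption of a particular temporal behavior".
    
    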
    
    Nivarthi, C.P. & Sick, B. Towards Few-Shot Time Series Anomaly Detection with Temporal Attention and Dynamic Thresholding 2023 International Conference on Machine Learning and Applications (ICMLA), pp. 1444-1450  inproceedings DOI  
    Abstract: Anomaly detection plays a pivotal role in diverse real-world applications such as cybersecurity, fault detection, network monitoring, predictive maintenance, and highly automated driving. However, obtaining labeled anomalous data can be a formidable challenge, especially when anomalies exhibit temporal evolution. This paper introduces LATAM (Long short-term memory Autoencoder with Temporal Attention Mechanism) for few-shot anomaly detection, with the aim of enhancing detection performance in scenarios with limited labeled anomaly data. LATAM effectively captures temporal dependencies and emphasizes significant patterns in multivariate time series data. In our investigation, we comprehensively evaluate LATAM against other anomaly detection models, particularly assessing its capability in few-shot learning scenarios where we have minimal examples from the normal class and none from the anomalous class in the training data. Our experimental results, derived from real-world photovoltaic inverter data, highlight LATAM’s superiority, showcasing a substantial mean F1 score improvement, even when trained on a mere two-week dataset. Furthermore, LATAM demonstrates remarkable results on the open-source SWaT dataset, achieving a 12% boost in accuracy with only two days of training data. Moreover, we introduce a simple yet effective dynamic thresholding mechanism, further enhancing the anomaly detection capabilities of LATAM. This underscores LATAM’s efficacy in addressing the challenges posed by limited labeled anomalies in practical scenarios, and it proves valuable for downstream tasks involving temporal representation and time series prediction, extending its utility beyond anomaly detection applications.
    BibTeX:
    @inproceedings{nivarthi2023towards,
      author = {Nivarthi, Chandana Priya and Sick, Bernhard},
      title = {Towards Few-Shot Time Series Anomaly Detection with Temporal Attention and Dynamic Thresholding},
      booktitle = {International Conference on Machine Learning and Applications (ICMLA)},
      publisher = {IEEE},
      year = {2023},
      pages = {1444--1450},
      doi = {http://dx.doi.org/10.1109/ICMLA58977.2023.00218}
    }
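    The abstract mentions a dynamic thresholding mechanism without specifying it; a common sketch, assumed here purely for illustration, adapts the anomaly threshold to a rolling mean plus a multiple of the rolling standard deviation of the autoencoder's reconstruction errors. This is not LATAM's exact rule.

    ```python
    import numpy as np

    # Illustrative dynamic threshold over reconstruction errors: flag a point
    # as anomalous when its error exceeds the rolling mean plus k rolling
    # standard deviations of the recent error history.

    def dynamic_threshold(errors, window=50, k=3.0):
        """Return a boolean anomaly flag per time step."""
        errors = np.asarray(errors, dtype=float)
        flags = np.zeros(len(errors), dtype=bool)
        for t in range(len(errors)):
            hist = errors[max(0, t - window):t]
            if len(hist) < 2:
                continue  # not enough history to estimate a threshold
            thr = hist.mean() + k * hist.std()
            flags[t] = errors[t] > thr
        return flags

    # 200 "normal" reconstruction errors followed by one large spike.
    errors = np.concatenate(
        [np.random.default_rng(1).normal(0.1, 0.01, 200), [0.9]])
    flags = dynamic_threshold(errors)  # the final spike is flagged
    ```

    Because the threshold tracks the recent error distribution, it adapts to slow drifts in the signal that a fixed cut-off would misclassify.
    
    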
    

    Created by JabRef on 28/03/2024.