Abstract

Applications that use machine learning (ML), such as highly autonomous driving, depend heavily on the performance of the ML model. The amount and quality of the data used for model training and validation are crucial. If the model cannot detect and interpret a new, rare, or potentially dangerous situation, often referred to as a corner case, the data are typically blamed for being insufficient in quality or quantity. However, the implemented ML model and its architecture also influence this behavior, so prediction errors originating from the ML model itself are not surprising. This work develops a corner case definition from an ML model's perspective to determine which aspects must be considered. To achieve this goal, we present an overview of corner case properties that are useful for describing, explaining, reproducing, or synthetically generating corner cases. To define ML corner cases, we review different notions in the literature and summarize them in a general description and a mathematical formulation, in which the expected relevance-weighted loss is the key to distinguishing corner cases from common data. Moreover, we show how to operationalize the corner case characteristics to determine the value of a corner case. Finally, we extend the taxonomy of ML corner cases by adding the input, model, and deployment levels, taking the influence of the corner case properties into account.
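The abstract names the expected relevance-weighted loss as the quantity that separates corner cases from common data. The sketch below is only an illustration of that idea, not the paper's formulation: the function name, the relevance weights, and the threshold are all assumptions made for this example.

```python
import numpy as np

def relevance_weighted_loss(losses, relevance):
    """Weight each sample's loss by a scenario-relevance factor.

    Samples with a high weighted loss are candidates for corner cases:
    the model performs poorly on them AND the situation matters.
    """
    return losses * relevance

# Illustrative per-sample losses and relevance weights (made up for this sketch).
losses = np.array([0.1, 0.2, 2.5])      # model loss per sample
relevance = np.array([1.0, 0.5, 2.0])   # how safety-relevant each scenario is

scores = relevance_weighted_loss(losses, relevance)
corner_case_mask = scores > 1.0         # threshold chosen arbitrarily here
```

Under these assumed numbers, only the third sample (high loss in a highly relevant scenario) would be flagged, which matches the intuition that a corner case is both erroneous and consequential.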

Links and resources

BibTeX key: heidecker2023corner

