In safety-critical applications such as automated driving, perception errors may pose an imminent risk to vulnerable road users (VRUs). To mitigate unexpected and potentially dangerous situations, so-called corner cases, perception models are trained on large amounts of data. However, these models are typically evaluated using task-agnostic metrics that do not reflect the severity of safety-critical misdetections. Misdetections of particular relevance to the safe driving task should therefore incur a stronger penalty during evaluation, so that corner cases can be pinpointed in large-scale datasets. In this work, we propose a novel metric, IoUw, which exploits pixel-level relevance of the semantic segmentation output to extend the notion of the intersection over union (IoU) by emphasizing small image areas affected by corner cases. We (i) employ IoUw to measure the effect of pre-defined relevance criteria on the segmentation evaluation, and (ii) use the relevance-adapted IoUw to refine the identification of corner cases. In our experiments, we investigate vision-based relevance criteria and physical attributes as per-pixel criticality to factor in the imminent risk, showing that IoUw precisely accentuates the relevance of corner cases.
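
The abstract does not spell out the definition of IoUw, so the following is only a plausible sketch of a pixel-weighted IoU: per-class intersection and union pixel counts are replaced by sums of per-pixel relevance weights, so that small but highly relevant regions dominate the score. The function name `weighted_iou` and the uniform-weight default are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def weighted_iou(pred, gt, weights, num_classes):
    """Illustrative pixel-weighted IoU (assumed form, not the paper's exact
    definition): for each class, sum the per-pixel relevance weights over the
    intersection and union of prediction and ground truth, then divide.

    pred, gt: integer class maps of shape (H, W)
    weights:  non-negative per-pixel relevance of shape (H, W)
    Returns a list of per-class scores (NaN where the class is absent).
    """
    ious = []
    for c in range(num_classes):
        inter = np.sum(weights * ((pred == c) & (gt == c)))
        union = np.sum(weights * ((pred == c) | (gt == c)))
        ious.append(inter / union if union > 0 else float("nan"))
    return ious
```

With uniform weights this reduces to the standard per-class IoU; increasing the weights on pixels flagged as safety-relevant (e.g. near a VRU) makes misdetections there lower the score disproportionately, matching the abstract's intent.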