Convolutional neural networks (CNNs) have seen spectacular advances over the past decade, particularly improving the state of the art in computer vision tasks. Semantic segmentation, i.e., image classification at the pixel level, is an essential step towards understanding a vehicle's surroundings from camera images in autonomous driving. While CNNs keep becoming more powerful predictive models, they still often fail when an input lies outside their learned concepts. The non-detection of objects in street scenes, including out-of-distribution (OOD) objects, poses a serious hazard and may cause public harm. Therefore, methods that determine when a model has failed are crucial to ensure safe and responsible use of CNNs in real-world applications. In this work we present a method for the detection of OOD objects, extending work from image classification to the more complex task of semantic segmentation. Our approach is based on the pixel-wise entropy derived from the CNN's probabilistic softmax output. This dispersion measure can be understood as prediction uncertainty indicating a potential failure per pixel. Paired with methods from image processing, we determine image regions in which an OOD object might be present but overlooked by the CNN. We trace the development of our method on the Lost and Found semantic segmentation dataset, inferred with the state-of-the-art CNN DeeplabV3+ with an Xception65 network backbone. We provide an in-depth statistical evaluation and discuss the strengths, but also the weaknesses, of our method. Additionally, we briefly analyze the topic from the point of view of safety engineering, including a critical evaluation of why common standards such as ISO 26262 cannot be applied.
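The core quantity described above, the pixel-wise entropy of the softmax output, can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: it assumes the CNN output is available as an `(H, W, C)` array of per-pixel class probabilities, computes the Shannon entropy at each pixel, and thresholds it to obtain a binary uncertainty mask (the threshold of half the maximum entropy `log(C)` is purely illustrative). Region-forming steps from image processing, such as connected-component labeling on this mask, would follow.

```python
import numpy as np

def pixelwise_entropy(softmax_probs, eps=1e-12):
    """Shannon entropy per pixel from an (H, W, C) softmax output.

    High entropy = the class distribution at that pixel is dispersed,
    which we read as high prediction uncertainty.
    """
    p = np.clip(softmax_probs, eps, 1.0)  # avoid log(0)
    return -np.sum(p * np.log(p), axis=-1)

# Toy softmax output: a 2x2 "image" with 3 classes.
probs = np.array([
    [[1.0, 0.0, 0.0],      # confident pixel -> entropy ~ 0
     [1/3, 1/3, 1/3]],     # maximally uncertain -> entropy = log(3)
    [[0.8, 0.1, 0.1],
     [0.5, 0.25, 0.25]],
])

H = pixelwise_entropy(probs)
# Illustrative threshold: flag pixels above half the maximum entropy.
mask = H > 0.5 * np.log(probs.shape[-1])
```

On this toy input the one-hot pixel yields near-zero entropy and stays unmasked, while the uniform pixel reaches the maximum entropy `log(3)` and is flagged as a candidate region for an overlooked OOD object.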