
Berit Schürrle


Academic Staff Member
Institut für Automatisierungstechnik und Softwaresysteme


+49 711 685 69188
+49 711 685 67302

Pfaffenwaldring 47
70550 Stuttgart
Room: 3.251

Journals and Conferences:
  2023

    1. B. Schürrle, P. Grimmeisen, J. Pfeiffer, T. Zimmermann, A. Morozov, and A. Wortmann, "Educating Future Software Engineers for Industrial Robotics", 2023.
    2. B. Schürrle, V. Sankarappan, and A. Morozov, "SynthiCAD: Generation of Industrial Image Data Sets for Resilience Evaluation of Safety-Critical Classifiers", 2023, pp. 2199–2206.

Research focus: Resilience analysis of safety-critical AI components

Description: With the continuous advancement of artificial intelligence (AI), the areas of application for deep learning (DL) methods have grown tremendously. The use of deep learning in industrial contexts has led to an increasing demand for safe and trustworthy AI: when deployed in safety-critical environments, the behavior of neural networks must remain predictable and reliable even in the presence of faults within the system.

This research aims to evaluate and increase the error resilience of AI components in safety-critical applications. Neural networks for computer vision are generally susceptible to two distinct fault types: internal and external errors. Internal faults are often the result of so-called bitflips, which can lead to inaccurate computations and hence erroneous network output. The most common errors, however, occur at an external level and corrupt the input data, caused for example by blind spots or rain on the camera lens. A neural network operating in a safety-critical environment must still function reliably when confronted with both fault types.

To improve the reliability of AI applications, faults within the network have to be detected efficiently and mitigated with appropriate countermeasures. To this end, the most common computer-vision network architectures are analyzed with a focus on the impact that injected faults have on their performance. In a second step, methods to reliably detect these different faults are implemented and tested, followed by mitigation actions that reduce the impact of the error on the neural network's output. This way, errors are caught rather than going undetected, leading to more stable and reliable network output.
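The internal fault type mentioned above, a single bitflip in a stored weight, can be illustrated with a minimal NumPy fault-injection sketch. This is a hypothetical example, not the project's actual tooling: the toy linear layer, the fault location `W[2, 5]`, and the choice of flipped bit are illustrative assumptions.

```python
import struct

import numpy as np


def flip_bit(value: float, bit: int) -> float:
    """Flip one bit in the IEEE-754 float32 encoding of a value."""
    (word,) = struct.unpack("<I", struct.pack("<f", value))
    word ^= 1 << bit
    (flipped,) = struct.unpack("<f", struct.pack("<I", word))
    return flipped


# Tiny linear "layer" y = W x, standing in for one layer of a vision network.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)).astype(np.float32)
x = rng.normal(size=8).astype(np.float32)
y_clean = W @ x

# Inject a single bitflip into one weight (hypothetical fault location).
# Bit 23 is the least-significant exponent bit of a float32, so the stored
# weight is doubled or halved: a silent data corruption, not a crash.
W_faulty = W.copy()
W_faulty[2, 5] = flip_bit(float(W_faulty[2, 5]), 23)
y_faulty = W_faulty @ x

# The output deviation is the quantity a fault-injection campaign measures.
deviation = float(np.abs(y_faulty - y_clean).max())
```

In general, flips in the high exponent bits of a float32 weight produce far larger output deviations than mantissa flips, which is why resilience analyses typically report fault impact per bit position.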








Academic Staff

Digital Twin for Automation Technology

Intelligent and Learning Automation Systems

Complexity Management in Automation Technology

Risk Analysis and Anomaly Detection for Networked Automation Systems

Fellow, Graduate School of Excellence advanced Manufacturing Engineering (GSaME)

