
About me

Specialized expertise in technical AI safety and risk assessment

My name is Stephanie Abrecht, and as a consultant for technical AI safety, I support companies in understanding and implementing the complex requirements for safe AI systems.

 

My focus is on translating technical AI risks and safety principles into clear, actionable measures. I help tech, product, and governance teams understand model behavior, identify system boundaries, and develop meaningful safety strategies.

 

I can explain complex AI safety issues in an understandable way and help integrate them into existing processes and governance structures.

Background

Robert Bosch GmbH
Engineer and Tech Lead Safe AI | 7 years

I am a long-time expert in AI safety. For seven years, I worked at Robert Bosch GmbH on Safe AI as an engineer and tech lead, where I co-developed methods for safe AI systems in the area of perception for highly automated driving.

 

This area is highly safety-critical: if the AI makes a mistake, the consequences can be serious and the safety of road users can be directly endangered. My focus was on how a machine understands its environment and how we can ensure that it functions reliably under real-world conditions.

EY Forensic Technology and Discovery Services
Data Analysis & Compliance | 2016-2017

At EY, I worked as a consultant in the area of Forensic Technology and Discovery Services, where I was responsible for data analysis for compliance and regulatory issues.

Education

Bachelor of Science in Physics (Free University of Berlin)
Master of Science in Neural Systems and Computation (ETH Zurich and University of Zurich)

Technical Depth

A sound understanding of AI systems, model behavior, and technical risks.

Clear Communication

Translation of complex topics for various stakeholders and teams.

Practical Orientation

Solutions that can be integrated into existing processes and structures.

My approach

I see myself as a sparring partner for companies that want to approach technical AI safety, safety argumentation, and product safety in a sound yet pragmatic way.


I support teams in developing clear technical safety evidence, categorizing risks, and integrating safety principles cleanly into development and governance.


My approach combines engineering depth with governance structures. Many risks arise not only in the model itself, but also in usage, integration, and a lack of context control. That is why I view AI safety as an interplay of technology, product responsibility, and organizational processes, and why I work closely with tech teams, product roles, and compliance.
