BaFin's Expectations for ICT Risk Management and the Use of AI
In Short
The Situation: The German Federal Financial Supervisory Authority ("BaFin") has issued non-binding guidance ("Guidance") clarifying how financial institutions should manage Information and Communication Technology ("ICT") risks arising from Artificial Intelligence ("AI")-based systems under Regulation (EU) 2022/2554 ("DORA") and related EU regulations.
The Result: AI systems—particularly generative AI and large language models ("LLMs")—must be fully embedded into existing ICT governance, testing, and third-party risk frameworks, with heightened supervisory scrutiny.
Looking Ahead: Financial institutions using or planning to deploy AI should reassess governance, testing, cloud outsourcing, and incident reporting practices to meet evolving supervisory expectations.
The Guidance is intended to provide financial institutions subject to BaFin supervision with additional direction on the treatment of AI-based systems under DORA and on related third-party and outsourcing risks under Delegated Regulation (EU) 2024/1774 on the ICT Risk Management Framework ("RTS RMF") and Delegated Regulation (EU) 2025/532 on outsourcing. The Guidance also contains a case study of an institution operating an LLM-based AI assistant across different infrastructures, together with an illustrative analysis of the risks involved and their respective treatment under Regulation (EU) No 575/2013 ("CRR") and Directive 2009/138/EC ("Solvency II"). According to the Guidance, financial institutions must ensure that AI-based systems are consistently governed, secured, and monitored within their established DORA compliance frameworks, including:
- Adopting a management-approved AI strategy, defining clear responsibilities, building AI competencies, and ensuring interdisciplinary collaboration, particularly where AI supports critical or important functions. The AI strategy should be aligned with a technology roadmap covering ICT resources, ICT capacity, and ICT investments. Further, an internal governance and control framework should be set up to address ICT risks. The management body remains accountable for AI-related ICT risk oversight.
- Integrating AI-based systems into the DORA-compliant ICT risk management framework, covering identification, protection, detection, incident response, recovery, training, and crisis communication, including scenarios involving adversarial actions.
- Applying robust development, change management, and documentation standards to in-house AI developments, including those outside the core IT function. Particular attention should be paid to the use of open-source components and AI-assisted code generation, as these may introduce hidden dependencies or vulnerabilities.
- Extending testing obligations to AI-based systems in the same way as to other ICT systems, with scope and depth depending on criticality. Testing must verify that systems are fit for purpose, including the quality of internally developed software. For generative AI and LLMs, testing is more challenging given their complex architectures, reliance on peripheral models, and frequent model updates by third-party providers.
- Applying defined operational processes to AI-based systems, covering ICT asset identification, classification, and documentation; capacity and performance monitoring; access control; logging; anomaly detection; incident response; business continuity; and backup. Policies should include secure uninstallation procedures to irretrievably remove AI-based systems and deactivate outdated versions to prevent misuse.
- Emphasizing the importance of ICT third-party risk management, given the widespread reliance on cloud services for AI-based systems, including conducting thorough risk assessments (in line with the supervisory notice of BaFin and Deutsche Bundesbank on cloud outsourcing of February 1, 2024), performing due diligence prior to contracting, and establishing clear contractual provisions on sub-outsourcing, security requirements, audit and access rights, service levels, incident reporting, exit strategies, and portability of models and data, as well as regular testing of contingency scenarios.
- Applying cybersecurity and data security controls across the AI lifecycle, proportionate to system criticality, including access controls, logging, encryption, and data protection measures. While DORA focuses on data integrity, the Guidance notes that data quality, particularly for AI training data, is also a key prerequisite for reliable use of AI-based systems.
- Finally, ensuring that ICT-related incidents involving AI-based systems are identified, assessed, and reported, with AI-specific detection, impact analysis, and severity classification integrated into existing incident response processes.
Three Key Takeaways
- The Guidance confirms that AI-based systems are not subject to a separate regime. They are subject to, and must be embedded into, existing DORA‑compliant ICT governance across the entire lifecycle of the AI-based system.
- While the Guidance is non-binding, we expect supervisors to refer to it as a de facto benchmark, with the result that AI systems—particularly generative AI and LLMs—need to be fully embedded into existing ICT governance, testing, and third-party risk frameworks, with heightened supervisory scrutiny.
- With respect to AI-based systems, financial institutions should continue to focus on robust third‑party and cloud risk management, end‑to‑end cybersecurity and data security (with emphasis on training data quality), and AI‑related incident detection, classification, and reporting to ensure compliance with supervisory expectations.