AI-Driven Threats & Defences in Financial Services
Introduction
AI creates distinct vectors for data exfiltration and fraud in the financial sector and raises significant compliance issues under laws such as the General Data Protection Regulation (GDPR), the Digital Operational Resilience Act (DORA), and the EU AI Act.
This session analyses the mechanics of AI-driven attacks, such as model inversion and deepfake social engineering, alongside the corresponding regulatory obligations. It addresses the practicalities of securing algorithmic decision-making systems and mitigating the risks associated with generative AI in high-stakes financial environments.
What You Will Learn
This live and interactive session will cover the following:
- Taxonomy of AI-specific cyber threats: data poisoning, model evasion, and membership inference attacks
- Mechanisms of deepfakes and synthetic identity fraud in 'Know Your Customer' (KYC) processes
- Regulatory intersections: System resilience under the GDPR, DORA, and the EU AI Act
- Governance frameworks for high-risk automated decision-making and profiling (Article 22 GDPR)
- Defensive AI: Utilising machine learning for anomaly detection and fraud prevention
- Incident response protocols for algorithmic breaches and bias exploitation
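To give a flavour of the "Defensive AI" topic above, here is a minimal, hypothetical sketch of machine-learning anomaly detection for fraud prevention. It uses an Isolation Forest (a common unsupervised anomaly-detection model) on simulated transaction amounts; the feature choice, data, and contamination rate are illustrative assumptions, not material from the session itself.

```python
# Illustrative sketch only: flagging anomalous transaction amounts
# with an Isolation Forest. All data and parameters are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" transactions: amounts clustered around 100.
normal = rng.normal(loc=100, scale=25, size=(500, 1))

# A few anomalous high-value transactions injected for the demo.
anomalies = np.array([[5000.0], [7500.0], [10000.0]])
X = np.vstack([normal, anomalies])

# contamination = expected fraction of anomalies (a tuning assumption).
model = IsolationForest(contamination=0.01, random_state=0).fit(X)
labels = model.predict(X)  # 1 = inlier, -1 = flagged anomaly

flagged = X[labels == -1].ravel()
print(sorted(flagged))  # the injected extreme amounts appear among those flagged
```

In practice, a production fraud model would use far richer features (merchant, geography, device, velocity) and would feed flagged transactions into a human-review or step-up-authentication workflow rather than blocking them outright.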
Recording of live sessions: Shortly after the Learn Live session has taken place, you will be able to access the recording, should you wish to revisit the material discussed.









