Advanced AI Risk Governance: Leveraging DPIAs for High-Risk Systems
Introduction
As organisations deploy increasingly complex AI systems, a Data Protection Impact Assessment (DPIA) becomes a crucial governance tool to demonstrate compliance.
This advanced session assumes prior knowledge of DPIAs and focuses on their application to high-risk and sophisticated AI use cases, including Large Language Models (‘LLMs’) and automated decision-making systems. It explores how DPIAs can be adapted to address AI-specific risks such as opacity, bias and evolving functionality in the UK, the EU and further afield.
What You Will Learn
This live and interactive course will cover the following:
- A critical analysis of current DPIA expectations and advanced guidance from the UK Information Commissioner's Office (‘ICO’)
- Applying fairness, lawfulness and transparency in AI systems whose outcomes may involve profiling individuals, generating ‘hallucinations’ or producing results that are opaque in nature
- Assessing necessity and proportionality in high-risk AI use cases
- Analysis of AI architectures, including Large Language Models, training data pipelines, deployment environments, and third-party dependencies
- Integrating a risk-based methodology aligned with the EU AI Act and the EU Digital Omnibus, and how these interact with the UK AI framework
- Advanced techniques for identifying, documenting and mitigating AI-specific risks, such as bias amplification, function creep, model drift and secondary data use
- Embedding effective human oversight mechanisms that are meaningful, auditable and operationally realistic
Recording of live sessions: shortly after the Learn Live session has taken place, you will be able to go back and access the recording, should you wish to revisit the material discussed.