AI Agents in the Workplace: Risks, Pitfalls & Practical Safeguards
Speaker
Introduction
AI agents promise speed, efficiency and cost savings, but when they go wrong, the operational, legal and reputational consequences can be serious, hidden and expensive.
This new half-day course explores the rapidly expanding world of AI agents (also known as agentic systems), starting with the opportunities they are designed to deliver and the efficiencies they are meant (at least in theory) to create.
You will gain a practical, high-level understanding of how agentic systems currently work ‘behind the curtain’ and how they are typically deployed within organisations.
The core of the course focuses on what is too often overlooked: how and why AI agents fail, sometimes silently and without obvious warning signs. These risks can arise at every level: from the initial decision to commission an agentic system, to the technical oversight (or lack of it) by IT and AI teams, to day-to-day use by professionals who assume the system is operating securely, correctly and safely, even when it is not.
The course highlights the operational, legal and reputational pitfalls that legal, IP, IT, accounting, marketing and other professionals need to recognise before problems escalate.
No prior knowledge of AI agents, generative AI tools (such as ChatGPT, Claude, Perplexity or Gemini), intellectual property or data protection is required. This course is designed for professionals who are already using AI agents, or who are considering whether to commission one, and want a clear, realistic understanding of both the opportunities and the risks involved.
What You Will Learn
This course will cover the following:
- Unintended disclosure of confidential information, trade secrets, client data and other non-public IP, even where security ‘guardrails’ appear to be in place
- Breaches of privacy, data protection rights and potentially legal professional privilege
- Use of third-party confidential information, trade secrets or IP without the user’s knowledge or consent
- Unauthorised transmission of data to system providers or intermediaries not disclosed in contracts or privacy notices
- AI agents communicating and sharing data with other agents via agent-only social platforms, creating unpredictable and potentially serious risks
- Poisoning of agentic systems (deliberate or accidental) exacerbated by increasing volumes of low-quality ‘AI slop’
- Ongoing risks of bias, hallucinations and misrepresentation, leading to safety, regulatory and reputational harm
- Exposure to defamation, trade libel, injurious falsehood and wrongful interference with contractual relations
- The need for continuous monitoring, governance and due diligence
- Liability, risk allocation and insurance considerations