AIRE: The Big Picture on AI Risks

How to Analyse AI Security Threats and Communicate Them Clearly—Without Overloading Your Stakeholders

Get the key frameworks and real-world examples you need to evaluate AI security risks fast

Companies are adopting Large Language Model (LLM)-based genAI systems at an alarming rate. Think customer service chatbots, interactive support co-pilots, research assistants, classification systems, and so on.

What you will learn in this workshop:

  • The surprising AI security risks most companies overlook—until it’s too late
  • How to explain AI risks to stakeholders so they actually listen (and act on your advice)
  • Forget endless research—this method helps you pinpoint AI security gaps in minutes
  • What AI red teams have uncovered about LLM security (and why you should be concerned)
  • The frameworks that make AI risk analysis clear, structured, and actionable
  • Why managing AI risks isn’t just about security—it’s about credibility in your role
  • The biggest mistake IT teams make when securing AI systems (and how to fix it fast)
  • Who needs to be looking at AI risk, and why it matters
  • The one thing you should never, ever let an LLM-based AI do
  • The hidden risks of LLM-powered systems—what every IT risk assessor needs to know
  • Why corporations can’t just blindly accept any AI system into production, and how you can drive that point home
  • My favourite diagram technique for analysing the security of cloud systems, and of AI systems in particular
  • How to apply the time-proven IT risk principles you already know to AI systems
  • The new (and old) frameworks that help organise risks and controls on AI systems
  • Where to find great sources of up-to-date AI security knowledge that you can easily digest
  • How I will support you after the workshop is over
  • The three most important elements of any IT system that I look for first, which help me zoom in on risks fast
  • Why you should never leave your AI unattended
  • The typical flaws that an AI red-team finds
  • What the experts say on the elements that an AI management system should have
  • The one question you should ask any provider of AI systems

100% LIFETIME GUARANTEE

If you do not get value from this training and its materials, you can claim a full refund at any time, for any reason. Simply email your receipt and you will be refunded within 3 working days.

WHAT YOU GET

You get 90 minutes of video lessons, workbook questions, and a substantial, curated collection of background material.

The course also has comment areas that are monitored by the author and your fellow students.


Your Instructor


Dr. Peter HJ van Eijk

I am one of the most experienced independent IT security and cloud trainers worldwide. Since 2011 I have focused on developing and delivering training, mainly on the business value and business risk of cloud computing, but also on Zero Trust, governance, audit, and Artificial Intelligence.

My background is broad. I worked as a researcher and instructor at Twente University, as a project leader and consultant at EDS and at an internet provider, and as an IT strategy, IT risk, and digital infrastructure consultant at Deloitte.

I have done strategy and implementation projects at small and large organisations, and in the public sector, across the world.

In recent years I also held a position as associate professor of cyber security and cloud.


Let's get started

Introduction to the course


Frequently Asked Questions


How long do I have access to the course?
How does lifetime access sound? After enrolling, you have unlimited access to this course for as long as you like, across any and all devices you own.
What if I am unhappy with the course?
We would never want you to be unhappy! If you are unsatisfied with your purchase, contact us in the first 30 days and we will give you a full refund.

Get started now!