TryHackMe | AI Threat Modelling | WriteUp
Assess and mitigate enterprise AI/ML risks via systematic, defender-focused auditing.
Disclaimer: This writeup is based on a Capture The Flag (CTF) challenge hosted on TryHackMe and it is intended for educational purposes only.
Artificial intelligence isn’t something organisations are still waiting on; it’s already embedded in enterprise operations. Language models handle customer support tickets. Recommendation engines surface products to millions of users. Fraud detection systems make real-time decisions that affect people’s lives.
Behind every one of these deployments is an attack surface that most security teams have never been trained to assess.
Traditional threat modelling provides a strong foundation, and frameworks like STRIDE have helped defenders systematically identify security threats for over two decades. But AI systems introduce assets, behaviours, and failure modes that those frameworks weren’t designed to handle. Training data can be poisoned. Model weights can be stolen. Prompts can be injected. And the outputs? They’re non-deterministic, meaning the same system can behave differently each time it’s queried.
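That non-determinism is easy to see in miniature. A minimal sketch, assuming a toy next-token distribution (the token names and logit values below are invented for illustration): with temperature-based sampling, the very same "prompt" can produce different outputs on repeated runs, which is exactly the behaviour a threat model has to account for.

```python
import math
import random

# Hypothetical next-token logits for one fixed prompt (illustrative values only).
logits = {"approve": 2.0, "deny": 1.8, "escalate": 0.5}

def softmax(scores, temperature=1.0):
    """Convert logits to a probability distribution at a given temperature."""
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

def sample(probs, rng):
    """Draw one token from the distribution (inverse-CDF sampling)."""
    r = rng.random()
    acc = 0.0
    for token, p in probs.items():
        acc += p
        if r < acc:
            return token
    return token  # guard against floating-point rounding

rng = random.Random()
probs = softmax(logits, temperature=1.0)
# The same "prompt" (same logits), queried repeatedly, need not agree with itself.
outputs = [sample(probs, rng) for _ in range(20)]
print(outputs)
```

Turn the temperature down towards zero and the distribution collapses onto the highest-logit token, making the output effectively deterministic; turn it up and the spread widens. Either way, the security takeaway is the same: you cannot validate an AI system by checking a single response.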
If your organisation is deploying AI (and chances are it is), your threat models need to…