Dimitri Vedeneev
Executive Director, Secure AI at CyberCX. Lead author of the 2026 Hack Report; section author for the “Hacking AI systems” chapter — the most detailed AI-vulnerability taxonomy in this KB.
Positions captured in the KB
- AI testing has moved from rarity to daily occurrence. “In the past two years, conversations with customers about security testing AI systems have moved from a handful in a year to a daily occurrence.” [[2026-05-12-cybercx-2026-hack-report]]
- AI systems ship at lower security maturity than other systems. Traditional security patterns (threat modelling at design, pen-test before deployment) “are often not fit for the pace and urgency of AI development, meaning AI systems are deployed to production at a lower level of security maturity than other systems.” This is the proximate cause of the 50% severe-finding rate in AI pen-tests. [[2026-05-12-cybercx-2026-hack-report]]
- The MCP attack-surface claim: “new standards like Model Context Protocol (MCP) are being adopted, but are not yet secure, enterprise-ready implementations. … data can flow bi-directionally between servers and clients, meaning that traditional security controls implemented on the server side of an application must now be implemented on the client side too. This is creating a rise in authentication-related issues with MCP implementations.” [[2026-05-12-cybercx-2026-hack-report]]
- The most common AI vulnerability classes: in-model IAM / excessive agency; weak, missing or in-model guardrails; prompt injection; lack of content filtering; system-prompt exposure; implicit model bias; insecure adoption of new standards (MCP). [[2026-05-12-cybercx-2026-hack-report]]
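The MCP point above — that bi-directional data flow forces traditionally server-side controls onto the client — can be sketched roughly as follows. This is a minimal illustrative sketch, not a real MCP SDK; the tool names, limits, and the naive injection check are all hypothetical assumptions.

```python
# Hypothetical client-side controls applied to data coming back from an
# MCP server, before it ever reaches the model. Mirrors checks that were
# traditionally enforced only on the server side.
from dataclasses import dataclass

ALLOWED_TOOLS = {"search_docs", "read_ticket"}  # client-side allowlist (illustrative)
MAX_RESULT_BYTES = 64_000                       # illustrative size cap

@dataclass
class ToolResult:
    tool: str
    content: str

def validate_tool_result(result: ToolResult) -> str:
    """Apply client-side controls to an MCP tool result."""
    if result.tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowlisted: {result.tool}")
    if len(result.content.encode()) > MAX_RESULT_BYTES:
        raise ValueError("oversized tool result")
    # Naive prompt-injection screen: flag content addressing the model directly.
    if "ignore previous instructions" in result.content.lower():
        raise ValueError("possible prompt injection in tool result")
    return result.content

# Usage: only validated content is forwarded to the model.
safe = validate_tool_result(ToolResult("search_docs", "3 matching documents"))
```

The design choice the quote implies: because the client cannot trust what a server streams back, validation (allowlisting, size limits, content screening, authentication checks) has to be duplicated at the client boundary rather than assumed upstream.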
See also
- cybercx
- jason-edelstein — Hack Report foreword author, STA Global Executive Director
- ai-security-defense
- ai-governance-au
- claude-mcps