SANS AI Cybersecurity Summit 2026 — three things I'm bringing back to Brights


I’m just back in Kyiv from the SANS AI Cybersecurity Summit (Arlington VA, April 20–21). Brights sponsored me as part of the company’s ISO 42001 readiness investment, and it was — to my own surprise — worth every hryvnia of the budget they put behind me.

This is my “what I’m actually going to do differently on Monday” post. Three takeaways that map directly onto the AI-risk work I’ve been running since September.

Quick orientation

The summit ran two days at the Hilton Arlington Rosslyn, plus pre-summit training options I didn’t take (Brights paid for the summit only, which was the right scope for a junior). The speaker list for the summit proper included some heavyweights.

I’ll write up notes on individual talks separately if I get to them (unlikely — I’m behind on threat-intel digests). For this post I want to focus on the three meta-lessons I want to carry back into Brights’ ISO 42001 work.

1. The AI-risk taxonomy is settling, slowly

Schneier’s keynote made a point I’ve been circling around in the Brights crosswalk work without articulating clearly: the AI-security field has spent ~3 years arguing about which framework to standardise on (NIST AI RMF, ISO 42001, the EU AI Act, the OECD AI Principles, the various corporate “responsible AI” white papers) and is only now starting to converge on a shared taxonomy underneath. The frameworks differ in form but are substantively quite similar.

ISO 42001’s 38 controls fit that 4-bucket structure cleanly once you read past the language. So does NIST AI RMF 1.0 (Govern / Map / Measure / Manage). So does the EU AI Act (categorised by risk-tier).

What I’m doing differently: restructuring the Brights crosswalk matrix from “ISO 42001 control → existing engineering practice” into “shared 4-bucket category → ISO 42001 control + NIST AI RMF function + EU AI Act risk-tier mapping → existing engineering practice”. More work upfront; much less work when the next AI governance standard ships and we have to add another column.
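To make the restructured row shape concrete, here is a minimal sketch of what one entry of the new matrix could look like. The control IDs, risk tier, and practice names below are placeholders I made up for illustration, not values from the actual Brights matrix:

```python
from dataclasses import dataclass, field


@dataclass
class CrosswalkRow:
    """One row of the restructured crosswalk: keyed by the shared
    category, mapped into each framework, then to existing practice."""
    category: str                    # shared 4-bucket category (placeholder name)
    iso_42001_controls: list[str]    # ISO 42001 Annex A control IDs
    nist_ai_rmf_function: str        # Govern / Map / Measure / Manage
    eu_ai_act_tier: str              # risk tier this category most often lands in
    engineering_practices: list[str] = field(default_factory=list)


# Hypothetical example row -- every value here is illustrative.
row = CrosswalkRow(
    category="governance",
    iso_42001_controls=["A.2.2", "A.3.2"],
    nist_ai_rmf_function="Govern",
    eu_ai_act_tier="high-risk",
    engineering_practices=["AI-risk pre-deployment review"],
)
```

The point of keying on the shared category is that adding the next governance standard means adding one field to the row, not re-deriving the whole mapping.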

2. “Boring seams” is exactly the right framing for production AI security

Julie Davila’s talk was the operationally richest one of the summit. The thesis: most AI-security disasters aren’t the spectacular prompt-injection-as-RCE scenarios we red-team against — they’re the boring seams between AI systems and the rest of the production environment.

What I’m doing differently: adding a “boring seams checklist” section to the Brights AI-risk pre-deployment review template — Davila’s five seam items plus the three from Anne Neuberger’s keynote (machine-speed incident-response readiness, an AI-system incident runbook, supply-chain governance for model artifacts). I’m going to test it against the AI-services product the dev team is building this quarter.
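As a sketch of how that template section might be wired up: the item wording below is my own, only the three Neuberger items are named in my notes, and Davila’s five seam items are left as a placeholder rather than guessed at:

```python
# Sketch of the "boring seams" section of a pre-deployment review template.
# Only the three Neuberger keynote items are filled in; Davila's five seam
# items belong where the placeholder comment sits.
BORING_SEAMS_CHECKLIST = [
    # ... Davila's five seam items go here ...
    "machine-speed incident-response readiness",
    "AI-system incident runbook exists and has been exercised",
    "supply-chain governance for model artifacts",
]


def open_items(answers: dict[str, bool]) -> list[str]:
    """Return the checklist items a deployment has not yet satisfied."""
    return [item for item in BORING_SEAMS_CHECKLIST
            if not answers.get(item, False)]
```

A review then reduces to calling `open_items` with whatever the dev team has signed off on and blocking deployment while the returned list is non-empty.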

3. Ukrainian context matters here

Yevhen’s session was the only one that explicitly addressed the Ukrainian situation. The shape of his argument: Russian APT operators (UAC-0010, UAC-0050, etc.) are already using LLM-assisted content generation for spear-phish landing pages and HTML lures. He showed concrete examples of generated lure content where the language tells (specific Ukrainian-vs-Russian word choices, missing post-2022 referent shifts) point at LLM generation. The defensive implication: detection rules that key on linguistic markers (the formal “дякую за ваш звіт”, “thank you for your report”, versus the more colloquial “дякую за репорт” you’d expect from a tech professional) may help fingerprint LLM-generated phish versus human-written.

This is downstream of detection-engineering work I’m already doing at Brights but it’s a good additional dimension.
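To make the idea concrete, here is a minimal register-marker scorer of my own (not from the talk) built on the two phrases above. A real detection rule would need a much larger marker set plus context features; this only shows the shape:

```python
import re

# Two register markers from the session -- illustrative only.
# Formal phrasing is a weak LLM tell in a tech-professional context.
FORMAL_MARKERS = [
    re.compile(r"дякую за ваш звіт", re.IGNORECASE),   # formal "thank you for your report"
]
COLLOQUIAL_MARKERS = [
    re.compile(r"дякую за репорт", re.IGNORECASE),      # colloquial loanword "репорт"
]


def register_score(text: str) -> int:
    """Positive = formal register (possible LLM generation),
    negative = colloquial (more likely human), zero = no marker hit."""
    formal = sum(bool(p.search(text)) for p in FORMAL_MARKERS)
    colloquial = sum(bool(p.search(text)) for p in COLLOQUIAL_MARKERS)
    return formal - colloquial
```

In practice this would be one feature among many feeding a phishing classifier, not a verdict on its own.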

I cornered Yevhen briefly after his session to ask one specific question — what does CERT-UA’s working relationship with EU CERTs look like in 2026, post-EUCC-rollout? His answer was nuanced and I’m not going to half-paraphrase it here, but the takeaway for me was: there are real research / collaboration opportunities for junior researchers in UA who are willing to write English-language content on UA-side threat observations. Nothing to act on immediately, but a thread to keep tugging.


Honest junior-perspective caveat

A SANS summit is somewhere between a conference and a recruiting event. The keynotes are excellent; the breakout sessions are variable; the “networking” is important if you’re senior and optional-but-useful if you’re junior. As a junior I got a lot out of it because I went into specific sessions with specific questions; I imagine if I’d shown up without that I’d have come back with a worse report.

The Schneier “Integrous AI” keynote alone is going to be referenced in every cybersec conversation I have at Brights for the next year. Worth the whole trip just for that. I’ll re-watch the recording when SANS releases it.

Слава Україні. 🇺🇦