SANS AI Cybersecurity Summit 2026 — three things I'm bringing back to Brights
Just back from the SANS AI Cybersecurity Summit in Arlington, VA. The Schneier and Anne Neuberger keynotes were both better than expected. Three things I'm taking back to the ISO 42001 readiness work at Brights.
I’m just back in Kyiv from the SANS AI Cybersecurity Summit (Arlington, VA, April 20–21). Brights sponsored me as part of the company’s ISO 42001 readiness investment, and it was — to my own surprise — worth every hryvnia of the budget they put behind me.
This is my “what I’m actually going to do differently on Monday” post. Three takeaways that map directly onto the AI-risk work I’ve been running since September.
Quick orientation
The summit ran two days at the Hilton Arlington Rosslyn, plus pre-summit training options I didn’t take (Brights paid for the summit only, which was the right scope for a junior). The speaker list for the summit proper included some heavyweights:
- Bruce Schneier — “Integrous AI” (Day 1 keynote)
- Jacob Klein, Anthropic — “This Is Not a Forecast”
- Sounil Yu — “Claw and Order” (riff on “Cyber Defense Matrix” applied to AI systems)
- Anne Neuberger — “Our Machine-Speed Mandate” (former Deputy National Security Advisor; the policy/national-security framing)
- Julie Davila — “The Boring Seams” (Day 2 — practical AI security at scale; one of the best non-keynote talks of the summit)
- Diana Kelley — “Cram It Up Your Cramhole, LaFleur” (provocatively titled; the actual content was about adversarial prompt injection taxonomy)
- BG Reid J. Novotny — “Beyond the Hype” (military-cyber framing)
- Pliny the Liberator — “Sailing Towards Vesuvius” (Day 2 closer — mostly red-team-on-LLMs theatre, lower information density than I’d hoped)
- Yevhen Pervushyn (Red Asgard) — solo session (Day 2 — the Ukrainian-perspective slot. We chatted briefly after; he was generous with time for a junior asking basic questions.)
I’ll write up notes on individual talks separately if I get to them (unlikely — I’m behind on threat-intel digests). For this post I want to focus on the three meta-lessons I want to carry back into Brights’ ISO 42001 work.
1. The AI-risk taxonomy is settling, slowly
Schneier’s keynote made a point I’ve been circling around in the Brights crosswalk work without articulating clearly: the AI-security field has spent roughly three years arguing about which framework to standardise on (NIST AI RMF, ISO 42001, the EU AI Act, the OECD AI Principles, the various corporate “responsible AI” white papers) and is now starting to converge on the shared taxonomy underneath them all. The frameworks differ in form but are substantively quite similar:
- Inputs: data lineage, training-set governance, IP/PII filtering before training, adversarial-input detection at runtime.
- Models: alignment, evaluation, explainability, vulnerability surface (prompt injection, data poisoning, model extraction).
- Outputs: harmful-content filtering, decision-explainability, audit logging, human-in-the-loop where stakes warrant.
- Lifecycle: deprecation, retraining, drift monitoring, post-deployment incident response.
ISO 42001’s 38 controls fit that 4-bucket structure cleanly once you read past the language. So does NIST AI RMF 1.0 (Govern / Map / Measure / Manage). So does the EU AI Act (categorised by risk-tier).
What I’m doing differently: restructuring the Brights crosswalk matrix from “ISO 42001 control → existing engineering practice” into “shared 4-bucket category → ISO 42001 control + NIST AI RMF function + EU AI Act risk-tier mapping → existing engineering practice”. More work upfront; much less work when the next AI governance standard ships and we have to add another column.
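To make that concrete, here is the shape I'm sketching for the restructured matrix, as a minimal Python sketch (I'll use Python for all examples in this post). The control IDs and practice strings are illustrative placeholders, not the actual Brights matrix:

```python
from dataclasses import dataclass, field

@dataclass
class CrosswalkEntry:
    bucket: str                    # Inputs / Models / Outputs / Lifecycle
    iso42001_controls: list[str]   # Annex A control IDs for this bucket
    nist_ai_rmf: list[str]         # Govern / Map / Measure / Manage
    eu_ai_act: str                 # applicable risk-tier note
    practices: list[str] = field(default_factory=list)  # existing engineering practice

crosswalk = [
    CrosswalkEntry(
        bucket="Inputs",
        iso42001_controls=["<fill in the Annex A data-governance control IDs>"],
        nist_ai_rmf=["Map", "Measure"],
        eu_ai_act="high-risk tier: data-governance obligations",
        practices=["training-set lineage doc", "PII filtering before training"],
    ),
    # Models / Outputs / Lifecycle entries follow the same shape.
]
```

The payoff is in the shape: when the next governance standard ships, it's one new field on `CrosswalkEntry` plus a backfill pass, not a new matrix.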
2. “Boring seams” is exactly the right framing for production AI security
Julie Davila’s talk was the most operationally rich of the summit. The thesis: most AI-security disasters aren’t the spectacular prompt-injection-as-RCE scenarios we red-team for; they’re the boring seams between AI systems and the rest of the production environment. Specifically:
- Auth boundaries: an LLM agent that has its OWN auth context to a downstream API can do things the human-on-behalf-of can’t. This is auth-as-confused-deputy at AI scale.
- Data exfiltration via context: an LLM that has access to customer A’s data AND is exposed to customer B’s prompt is one jailbreak away from cross-customer leakage. The fix is data-siloing by customer at the prompt-construction layer (minimal sketch after this list).
- Logging gaps: most logging stacks weren’t designed to handle multi-thousand-token prompt + response pairs at scale. Storing all the inputs/outputs is expensive; not storing them is an audit-evidence gap.
- Cost-runaway: an unguarded LLM endpoint is a denial-of-wallet vulnerability. Rate-limiting is mandatory but often forgotten in internal-only deployments.
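Of the four seams, the cross-customer one is the most directly fixable in code, so here's the shape of the prompt-construction fix as I understood it. A toy Python sketch only: `store` and `search_documents` are hypothetical stand-ins for a retrieval interface, and real isolation has to be enforced inside the store as well, not just at the call site:

```python
# Toy sketch: per-customer silo at the prompt-construction layer.
# Retrieval is keyed on the requesting tenant, so even a successful
# jailbreak finds no cross-customer data in the context window.
def build_prompt(tenant_id: str, user_query: str, store) -> str:
    # Only this tenant's documents can ever enter the context.
    docs = store.search_documents(tenant_id=tenant_id, query=user_query, top_k=5)
    context = "\n---\n".join(d.text for d in docs)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_query}"
    )
```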
What I’m doing differently: adding a “boring seams checklist” section to the Brights AI-risk pre-deployment review template. It’s the four items above plus three from Anne Neuberger’s keynote (machine-speed incident-response readiness, an AI-system incident runbook, supply-chain governance for model artifacts). Going to test it against the AI-services product the dev team is building this quarter.
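For what it's worth, here's the draft checklist as structured data so it can live next to the review tooling. The wording is my paraphrase of the two talks, not the final Brights template:

```python
# Draft "boring seams" section of the pre-deployment review template.
BORING_SEAMS_CHECKLIST = [
    # from Julie Davila's talk
    "agent auth context is no broader than the on-behalf-of human's",
    "prompt construction is siloed per customer/tenant",
    "prompt + response logging exists, with a retention decision on record",
    "endpoint has rate limits and a spend ceiling (denial-of-wallet)",
    # from Anne Neuberger's keynote
    "machine-speed incident-response readiness assessed",
    "AI-system incident runbook written and exercised",
    "model artifacts covered by supply-chain governance",
]

def open_items(answers: dict[str, bool]) -> list[str]:
    """Return the checklist items not yet marked satisfied."""
    return [item for item in BORING_SEAMS_CHECKLIST if not answers.get(item, False)]
```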
3. Ukrainian context matters here
Yevhen’s session was the only one that explicitly addressed the Ukrainian situation. The shape of his argument: Russian APT operators (UAC-0010, UAC-0050, etc.) are already using LLM-assisted content generation for spear-phish landing pages and HTML lures, and he showed concrete examples of generated lure content where the language tells (specific Ukrainian-vs-Russian word choices, missing post-2022 referent shifts) point at LLM generation. The defensive implication: detection rules that key on linguistic markers (the formal “дякую за ваш звіт”, “thank you for your report”, versus the colloquial “дякую за репорт” you’d actually expect from a tech professional) may help fingerprint LLM-generated phish versus human-written.
This is downstream of detection-engineering work I’m already doing at Brights but it’s a good additional dimension.
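As a toy illustration of what that extra dimension could look like (the marker lists below are illustrative stand-ins, not a vetted rule set, and a real detector would need morphology-aware matching plus a corpus of known-human vs known-LLM lures):

```python
# Toy register check: stilted-formal phrasing in a supposedly casual
# tech email leans LLM-generated; colloquial loanword phrasing leans human.
FORMAL_MARKERS = ["дякую за ваш звіт", "будь ласка, перегляньте"]  # formal register
COLLOQUIAL_MARKERS = ["дякую за репорт", "глянь, будь ласка"]      # tech-slang register

def register_score(text: str) -> float:
    """Score in [-1, 1]: >0 leans formal/LLM-like, <0 leans colloquial/human-like."""
    t = text.lower()
    formal = sum(m in t for m in FORMAL_MARKERS)
    colloquial = sum(m in t for m in COLLOQUIAL_MARKERS)
    total = formal + colloquial
    return 0.0 if total == 0 else (formal - colloquial) / total
```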
I cornered Yevhen briefly after his session to ask one specific question — what does CERT-UA’s working relationship with EU CERTs look like in 2026, post-EUCC rollout? His answer was nuanced and I’m not going to half-paraphrase it here, but the takeaway for me was: there are real research/collaboration opportunities for junior researchers in UA who are willing to write English-language content on UA-side threat observations. Nothing to act on immediately, but a thread to keep tugging.
Logistical notes
- Hilton Arlington Rosslyn was a good venue. The walk to the Foggy Bottom Metro was about 15 minutes; a few of us went into DC for dinner on Day 1.
- US visa for UA citizens: I got mine via the Warsaw consulate in Q1 2026, ahead of this trip. Plan about three months out from departure if you’re going through Warsaw.
- Worth combining BSides SF (San Francisco, March) and SANS AI Summit (Arlington, April) in a single US trip if you can get the visa to support it. I did this and it kept the per-event cost reasonable for Brights.
Honest junior-perspective caveat
A SANS summit is somewhere between a conference and a recruiting event. The keynotes are excellent; the breakout sessions are variable; the “networking” is important if you’re senior and optional-but-useful if you’re junior. As a junior I got a lot out of it because I went into specific sessions with specific questions; I imagine if I’d shown up without that I’d have come back with a worse report.
The Schneier “Integrous AI” keynote alone is going to be referenced in every cybersec conversation I have at Brights for the next year. Worth the whole trip just for that. I’ll re-watch the recording when SANS releases it.
Слава Україні (Glory to Ukraine). 🇺🇦