Join your hosts, Anton Chuvakin and Timothy Peacock, as they talk with industry experts about some of the most interesting areas of cloud security. If you like having threat models questioned and a few bad puns, please tune in!
The market already has Breach and Attack Simulation (BAS) for testing known TTPs. You’re calling this 'AI-powered' red teaming. Is this just a fancy LLM stringing together known attacks, or is there a genuine agent here that can discover a truly novel attack path that a human hasn't scripted for it?
Let's talk about the 'so what?' problem. Pentest reports are famous for becoming shelf-ware. How do you turn a complex AI finding into an actionable ticket for a developer, and more importantly, how do you help a CISO decide which of the thousand 'criticals' to actually fix first?
You're asking customers to unleash a 'hacker AI' in their production environment. That's terrifying. What are the 'do no harm' guardrails? How do you guarantee your AI won't accidentally run rm -rf on a critical server or cause a denial of service while it's 'exploring'?
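To make the question concrete: below is a minimal sketch of what a 'do no harm' guardrail could look like, assuming the agent proposes shell commands that a policy layer vets before execution. Everything here (the patterns, the function names) is illustrative, not any vendor's actual implementation.

```python
import re

# Illustrative deny-list policy: block destructive or availability-impacting
# commands before an autonomous agent may run them. A sketch of the concept,
# not any vendor's actual guardrail.
DENY_PATTERNS = [
    r"\brm\s+-\w*(rf|fr)\w*\b",      # rm -rf / rm -fr and similar
    r"\bmkfs(\.\w+)?\b",             # filesystem wipes
    r"\bdd\s+if=",                   # raw disk writes
    r"\b(shutdown|reboot|halt)\b",   # availability impact
]

def is_safe(command: str) -> bool:
    """Return False if the proposed command matches a destructive pattern."""
    return not any(re.search(p, command) for p in DENY_PATTERNS)

def execute_with_guardrail(command: str) -> str:
    """Vet an agent-proposed command; refuse rather than run anything risky.
    A real system might also rate-limit probes to avoid accidental DoS."""
    if not is_safe(command):
        return f"BLOCKED by policy: {command}"
    return f"Would execute (in a sandboxed, scoped environment): {command}"

print(execute_with_guardrail("nmap -sV 10.0.0.5"))   # allowed recon probe
print(execute_with_guardrail("rm -rf /var/www"))     # blocked
```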
You mentioned the AI is particularly good at finding authentication bugs. Why that specific category? What's the secret sauce there, and what's the reaction from customers when you show them those types of flaws?
Is this AI meant to replace human red teamers, or to make them better? Does it automate the boring stuff so experts can focus on creative business logic attacks, or is the ultimate goal to automate the entire red team function away?
So, is this just about finding holes, or are you closing the loop for the blue team? Can the attack paths your AI finds be automatically translated into high-fidelity detection rules? Is the end goal a continuous 'purple team engine' that’s constantly training our defenses?
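As a hypothetical illustration of that 'purple team engine' idea: a minimal sketch that turns an AI-discovered attack step (expressed as structured JSON) into a Sigma-style detection rule. The finding schema and all field values are invented for the example.

```python
# Hypothetical finding emitted by an AI red-team agent; the schema is
# invented for illustration, and real products will differ.
finding = {
    "technique": "T1059.001",                # MITRE ATT&CK: PowerShell
    "process": "powershell.exe",
    "command_contains": "-EncodedCommand",
    "title": "Encoded PowerShell used to fetch second-stage payload",
}

def finding_to_sigma(f: dict) -> str:
    """Render a Sigma-style YAML detection rule from one attack-path step."""
    return "\n".join([
        f"title: {f['title']}",
        "tags:",
        f"  - attack.{f['technique'].lower()}",
        "logsource:",
        "  category: process_creation",
        "  product: windows",
        "detection:",
        "  selection:",
        f"    Image|endswith: '\\{f['process']}'",
        f"    CommandLine|contains: '{f['command_contains']}'",
        "  condition: selection",
        "level: high",
    ])

print(finding_to_sigma(finding))
```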
And what about remediation? What makes your findings more fixable than those in a typical pentest report?
What happens to red team testing in 2-3 years as this technology gets better?
We often hear about the aspirational idea of an "Iron Man suit" for the SOC: a system that empowers analysts to be faster and more effective. What does this ideal future of security operations look like from your perspective, and what are the primary obstacles preventing SOCs from achieving it today?
You've also described AI in the SOC as a "Dr. Jekyll and Mr. Hyde" situation. Could you walk us through the "Jekyll," the noble and beneficial promise of AI, and the factors that can turn it into the dangerous "Mr. Hyde"?
Let's drill down into the heart of the "Mr. Hyde" problem: the data. Many believe that AI can fix a team's messy data, but you've noted that "it's all about the data, duh." What's the story?
“AI-ready SOC”: what is the foundational work a SOC needs to do to ensure its data is AI-ready, and what happens when that step is skipped?
And can we use AI itself to help with this foundational data problem?
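A minimal sketch of what that "AI-ready" groundwork can mean in practice: normalizing inconsistent log fields into one schema before any model sees the data. The field names, mappings, and events below are invented for illustration.

```python
from datetime import datetime, timezone

# Invented example: two tools log the "same" event with different field
# names and timestamp formats -- exactly the mess that undermines AI
# pipelines when it is not normalized first.
raw_events = [
    {"src": "10.0.0.5", "user_name": "alice", "time": "2025-06-01T12:00:00+00:00"},
    {"source_ip": "10.0.0.5", "user": "alice", "ts": 1748779200},
]

FIELD_MAP = {"src": "src_ip", "source_ip": "src_ip",
             "user_name": "user", "user": "user"}

def normalize(event: dict) -> dict:
    """Map vendor-specific fields onto one schema with UTC ISO timestamps."""
    out = {}
    for key, value in event.items():
        if key in FIELD_MAP:
            out[FIELD_MAP[key]] = value
        elif key in ("time", "ts"):
            if isinstance(value, (int, float)):  # epoch seconds -> ISO 8601
                value = datetime.fromtimestamp(value, tz=timezone.utc).isoformat()
            out["timestamp"] = value
    return out

for e in raw_events:
    print(normalize(e))  # both events collapse to the same schema
```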
How do we measure progress towards an AI SOC? What gets better, and when? How would we know?
What SOC metrics will show improvement? Will anything get worse?
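One way to ground that measurement question: compute baseline SOC metrics such as mean time to detect (MTTD) and mean time to respond (MTTR), then track them before and after AI adoption. The incident records below are invented for the sketch.

```python
from datetime import datetime

# Invented incident records: (activity started, detected, resolved).
# Tracking MTTD/MTTR across AI adoption is one concrete way to see
# whether anything actually improves -- or quietly gets worse.
incidents = [
    ("2025-05-01 08:00", "2025-05-01 09:30", "2025-05-01 14:00"),
    ("2025-05-03 10:00", "2025-05-03 10:20", "2025-05-03 12:00"),
]

FMT = "%Y-%m-%d %H:%M"

def hours_between(a: str, b: str) -> float:
    return (datetime.strptime(b, FMT) - datetime.strptime(a, FMT)).total_seconds() / 3600

# MTTD: start -> detection; MTTR: detection -> resolution.
mttd = sum(hours_between(start, det) for start, det, _ in incidents) / len(incidents)
mttr = sum(hours_between(det, res) for _, det, res in incidents) / len(incidents)
print(f"MTTD: {mttd:.1f}h  MTTR: {mttr:.1f}h")
```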
Drawing from the "Aspiring CIO and CISO" book's focus on continuous improvement, how have you seen the necessary skills, knowledge, experience, and behaviors for a CISO evolve, especially when guiding an organization through a transformation?
Could you share lessons learned about leadership and organizational resilience during such a critical period, and how does that experience reshape your approach to future transformations?
Many organizations are undergoing transformations, often heavily involving cloud technologies. From your perspective, what is the most crucial—and perhaps often overlooked—role a CISO plays in ensuring security is an enabler, not a roadblock, during such large-scale changes?
Have you ever seen a CISO who is a cloud champion for the organization?
What is your best advice for a CISO meeting the cloud for the first time?
What is your best advice for a CISO meeting AI for the first time?
How do you balance continuous self-improvement and development with the day-to-day pressures and responsibilities of the role?
In what ways is the current wave of enterprise AI adoption different from previous technology shifts? If we say “but it is different this time”, then why?
What is your take on “consumer-grade AI for business” vs. enterprise AI?
A lot of this sounds a bit like the CASB era circa 2014. How is this different with AI?
The concept of "routing prompts for risk and cost management" is intriguing. Can you elaborate on the architecture and the specific AI engines WitnessAI uses to achieve this, especially for large global corporations?
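To make the routing question concrete: a hypothetical sketch of prompt routing for risk and cost. The risk heuristic, model tiers, and prices are all invented for illustration and do not describe WitnessAI's actual architecture.

```python
import re

# Invented model tiers and per-1K-token prices; a real system would pull
# these from policy and pricing data.
MODELS = {
    "local_small":  0.0001,   # self-hosted; sensitive data stays in-house
    "hosted_large": 0.0100,   # more capable, but data leaves the perimeter
}

# Toy risk heuristic; production systems would use classifiers, not a regex.
SENSITIVE = re.compile(r"\b(ssn|password|api[_-]?key|patient|salary)\b", re.I)

def route(prompt: str) -> tuple[str, float]:
    """Pick a model tier by risk, then estimate cost (~4 chars per token)."""
    model = "local_small" if SENSITIVE.search(prompt) else "hosted_large"
    est_cost = (len(prompt) / 4 / 1000) * MODELS[model]
    return model, est_cost

for p in ["Summarize this public blog post for me",
          "Draft a letter that includes the patient record for..."]:
    model, cost = route(p)
    print(f"{model:12s} ${cost:.6f}  <- {p[:40]}")
```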
What are you seeing in the identity space for AI access? Can you give us a rundown of the different tradeoffs teams are making when it comes to managing identities for agents?
You invented the concept of SOAPA – Security Operations & Analytics Platform Architecture. As we look towards SOAPA 2025, how do you see the ongoing debate between consolidating security around a single platform versus a more disaggregated, best-of-breed approach playing out?
What are the key drivers for either strategy in today's complex environments? How can we have both “decoupling” and platformization going at the same time?
With all the buzz around Generative AI and Agentic AI, how do you envision these technologies changing the future of the Security Operations Center (and SOAPA of course)?
Where do you see AI really working in the SOC today, and what is the proof that it is actually happening? What does a realistic "AI SOC" look like in the next few years, and what are the practical implications for security teams?
“Integration” is always a hot topic in security, and it has been for decades. Within the context of SOAPA and the adoption of advanced analytics, where do you see the most critical integration challenges today: vendor-centric ecosystems, strategic partnerships, or the push for open standards?