One of the most talked-about transformations happening right now is the shift from traditional, rule-based automation to AI-powered decision-making within Security Operations Centers (SOCs). But what does that shift actually look like in practice?
Ahmed Achchack, co-founder and CEO of Qevlar AI, sat down with Filip Stojkovski, who leads the SecOps team at Snyk and authors the CyberSecurity Automation and Orchestration blog, to explore this critical shift.
In this interview, Filip:
→ Unpacks the limitations of legacy SOAR playbooks
→ Discusses the difference between “AI for SOAR” and truly autonomous SOC agents
→ Shares how organizations can evaluate the ROI of AI tools, the metrics that matter beyond speed, and what to watch out for when selecting AI-based SOC solutions.
Whether you’re a CISO evaluating vendors or a SOC engineer planning for the future, this conversation provides a grounded, strategic lens on what’s next for SecOps and why it matters.
Filip: I think we're at an awkward stage where the term "AI" is being slapped onto almost everything as a catch-all for innovation. In the security automation space, I see two categories: traditional rule-based automations and genuine AI agents.
The key difference lies in autonomy and context. Traditional automations are very rigid: "If A, do B, then C." But AI agents can be given a problem and figure out how to solve it, even with incomplete data. That’s a significant leap forward.
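The "If A, do B, then C" distinction can be sketched in code. The following is a hypothetical illustration, not any vendor's implementation: the rigid playbook fails outright when an expected field is missing, while the agent-style triage weighs whatever evidence is present. All field names, verdict labels, and the toy scoring logic are invented for the example.

```python
# Illustrative contrast: rigid playbook vs. agent-style triage.
# All field names and thresholds here are hypothetical.

KNOWN_BAD_IPS = {"203.0.113.7"}  # example-only indicator list

def rigid_playbook(alert: dict) -> str:
    # "If A, do B, then C" — breaks (KeyError) the moment
    # a field the hard-coded logic expects is absent.
    if alert["source_ip"] in KNOWN_BAD_IPS:   # step A
        return "block"                        # step B
    return "close"                            # step C

def agent_style_triage(alert: dict) -> str:
    # Reasons over whichever signals exist instead of one fixed path,
    # so incomplete data degrades the verdict rather than the pipeline.
    score = 0
    if alert.get("source_ip") in KNOWN_BAD_IPS:
        score += 2
    if "powershell -enc" in alert.get("command_line", "").lower():
        score += 2  # encoded PowerShell is a common suspicious signal
    if alert.get("asset_criticality") == "high":
        score += 1
    if score >= 2:
        return "escalate"
    return "close" if score == 0 else "monitor"
```

The point of the sketch is the failure mode, not the scoring: an alert missing `source_ip` crashes the rigid path but is still triaged by the flexible one.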
Filip: I like to use the self-driving car analogy. Most SOCs are at Level 1 or maybe 1.5 automation — basic tasks handled, but no real intelligence. Problems arise when teams build 50+ automations and suddenly spend all their time maintaining them, fixing broken APIs, adjusting logic, etc. You fall into this trap of just keeping the lights on.
AI starts adding value when it can reason through ambiguity and decide on next steps. That’s the real inflection point: when you move from maintaining automations to enabling autonomy.
Filip: Humans still play a key role in bridging the gap between investigation and response. Once there's a verdict — say, an alert is malicious — a human is still best at understanding the broader context and deciding what happens next. Is it time to remediate? Is it a false positive? Do we go back to detection engineering?
I believe security teams will evolve to focus more on engineering and agent management, not just alert triage. Analysts will still exist, but their roles will become more hybrid: analyst + engineer.
Filip: Speed used to be the holy grail ("We closed alerts in under 10 seconds!"), but now it's about trust.
Alongside traditional metrics like mean time to detect/respond, we need new, trust-oriented ones.
Also, I suggest teams run their AI in “shadow mode” — let it work in parallel and compare its decisions to those of human analysts. That’s a great way to evaluate effectiveness.
The key insight from this conversation is that success requires strategic thinking beyond just technology adoption. It demands new metrics, evolved roles, and a clear understanding of where human expertise remains irreplaceable. Most importantly, it requires security leaders who can distinguish between AI-enabled automation and truly autonomous systems.
The question for security leaders isn't whether to adopt AI in their SOCs, but how quickly they can build the capabilities needed to harness its potential effectively. In a threat landscape that never sleeps, that timeline matters more than ever.
Found this interview interesting? Listen to the full episode on the SuperSOC podcast.
🎧 Apple Podcasts