
AI vs Automation in the SOC: What Security Leaders Need to Know

Qevlar AI team

One of the most talked-about transformations happening right now is the shift from traditional, rule-based automation to AI-powered decision-making within Security Operations Centers (SOCs). But what does that shift actually look like in practice?

Ahmed Achchack, co-founder and CEO of Qevlar AI, sat down with Filip Stojkovski, who leads the SecOps team at Snyk and writes the Cybersecurity Automation and Orchestration blog, to explore this critical shift.

In this interview, Filip:

→ Unpacks the limitations of legacy SOAR playbooks

→ Discusses the difference between “AI for SOAR” and truly autonomous SOC agents

→ Shares how organizations can evaluate the ROI of AI tools, the metrics that matter beyond speed, and what to watch out for when selecting AI-based SOC solutions.

Whether you’re a CISO evaluating vendors or a SOC engineer planning for the future, this conversation provides a grounded, strategic lens on what’s next for SecOps and why it matters.

Ahmed: Filip, you've written extensively about the shortcomings of traditional SOCs and SOAR playbooks, not just technically but organizationally. Can you unpack what's broken and why so many teams are still clinging to this model?

Filip:  I think we're at an awkward stage where the term "AI" is being slapped onto almost everything as a catch-all for innovation. In the security automation space, I see two categories:

  1. AI for SOAR – where AI helps you build automations more efficiently (e.g., describing workflows in plain English and getting skeleton playbooks generated).
  2. Autonomous SOC – where AI isn't helping you build, it's doing the work itself.

The key difference lies in autonomy and context. Traditional automations are very rigid: "If A, do B, then C." But AI agents can be given a problem and figure out how to solve it, even with incomplete data. That’s a significant leap forward.

Ahmed: So when does automation stop being helpful? When does it start slowing down a SOC?

Filip:  I like to use the self-driving car analogy. Most SOCs are at Level 1 or maybe 1.5 automation — basic tasks handled, but no real intelligence. Problems arise when teams build 50+ automations and suddenly spend all their time maintaining them, fixing broken APIs, adjusting logic, etc. You fall into this trap of just keeping the lights on.

AI starts adding value when it can reason through ambiguity and decide on next steps. That’s the real inflection point: when you move from maintaining automations to enabling autonomy.

Ahmed: That makes a lot of sense. So in your experience, where do human analysts still outperform AI? And where should we confidently hand off tasks to machines?

Filip: Humans still play a key role in bridging the gap between investigation and response. Once there's a verdict — say, an alert is malicious — a human is still best at understanding the broader context and deciding what happens next. Is it time to remediate? Is it a false positive? Do we go back to detection engineering?

I believe security teams will evolve to focus more on engineering and agent management, not just alert triage. Analysts will still exist, but their roles will become more hybrid: analyst + engineer.


Ahmed: A lot of teams are trying to figure out how to measure ROI when they adopt AI in the SOC. What metrics do you recommend?

Filip:  Speed used to be the holy grail ("We closed alerts in under 10 seconds!"), but now it's about trust.

Alongside traditional metrics like mean time to detect/respond, we need new ones:

  • Escalation Accuracy – How often does the AI escalate the right alerts to humans?
  • Decision Accuracy – How often the AI's malicious/benign verdicts are correct, measured through true and false positive rates.
  • Feedback Loop Health – Is the AI learning from feedback, or is it just closing alerts blindly?
  • Explainability Time – How long does it take an analyst to understand the AI’s decision and know what to do next?
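To make these metrics concrete, here is a minimal sketch of how a team might compute escalation and decision accuracy from a sample of AI-triaged alerts that analysts have re-reviewed. The data model and field names are illustrative assumptions, not any particular product's schema:

```python
from dataclasses import dataclass

@dataclass
class TriagedAlert:
    ai_verdict: str        # "malicious" or "benign", as decided by the AI
    ai_escalated: bool     # did the AI escalate this alert to a human?
    analyst_verdict: str   # ground-truth verdict after human review

def decision_accuracy(alerts):
    """Share of alerts where the AI's verdict matches the analyst's."""
    return sum(a.ai_verdict == a.analyst_verdict for a in alerts) / len(alerts)

def escalation_accuracy(alerts):
    """Share of escalated alerts that actually warranted human attention."""
    escalated = [a for a in alerts if a.ai_escalated]
    if not escalated:
        return None
    return sum(a.analyst_verdict == "malicious" for a in escalated) / len(escalated)

# Tiny illustrative sample: two correct verdicts, one false-positive escalation.
sample = [
    TriagedAlert("malicious", True, "malicious"),
    TriagedAlert("benign", False, "benign"),
    TriagedAlert("malicious", True, "benign"),
]
print(decision_accuracy(sample), escalation_accuracy(sample))
```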

Also, I suggest teams run their AI in “shadow mode” — let it work in parallel and compare its decisions to those of human analysts. That’s a great way to evaluate effectiveness.
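As a rough illustration of what shadow mode could look like in practice, the AI triages the same alerts as the human queue, its verdicts are only recorded, and agreement is tracked over time. The triage callables below are placeholders, not a specific vendor's API:

```python
def shadow_mode_compare(alerts, ai_triage, human_triage):
    """Run the AI alongside the human queue and measure verdict agreement.

    ai_triage / human_triage are callables that return a verdict string.
    Only the human verdict is acted on; the AI's is logged for comparison.
    """
    records = []
    for alert in alerts:
        ai_verdict = ai_triage(alert)        # runs in parallel, takes no actions
        human_verdict = human_triage(alert)  # remains the source of truth
        records.append((alert, ai_verdict, human_verdict))

    if not records:
        return 0.0, records
    agreement = sum(ai == human for _, ai, human in records) / len(records)
    return agreement, records
```

Reviewing the disagreements in `records` is usually more informative than the headline agreement rate, since that is where the AI either catches something analysts missed or closes something it should not have.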

Ahmed: Let’s say a CISO is looking to adopt an AI SOC analyst. What are the top three questions they should ask before buying?

Filip:

  1. Understand Your Use Cases – Does the platform handle your alert types and operational model? Is it just a glorified co-pilot, or truly autonomous?
  2. Assess Coverage and Integration – Can it connect to your SIEM and security tools directly? Does it cover 90%+ of your alerts, or just a fraction?
  3. Auditability and Access Control – Is the system transparent? Can you audit its decisions and see what data it accessed? Be wary of platforms asking for full admin access. Start with read-only and escalate carefully.

Conclusion

The key insight from this conversation is that success requires strategic thinking beyond just technology adoption. It demands new metrics, evolved roles, and a clear understanding of where human expertise remains irreplaceable. Most importantly, it requires security leaders who can distinguish between AI-enabled automation and truly autonomous systems.

The question for security leaders isn't whether to adopt AI in their SOCs, but how quickly they can build the capabilities needed to harness its potential effectively. In a threat landscape that never sleeps, that timeline matters more than ever.

Found this interview interesting? Listen to the full episode on the SuperSOC podcast.

🎧 Apple podcasts

🎙️ Spotify
