The promise of AI-powered Security Operations Centers (SOCs) dominates industry conversations today, but how much of this excitement is justified? While vendors promise everything from "AI SOCs" to fully autonomous security operations, the reality is far more nuanced.
Ahmed Achchak, the co-founder and CEO of Qevlar AI, explores this critical gap between hype and reality with Dr. Anton Chuvakin, Security Solution Strategist at Google Cloud and former Research VP at Gartner, where he coined the term "EDR" (Endpoint Detection and Response).
In this interview, Dr. Chuvakin:
→ Explains the crucial difference between "AI in SOC" and "AI SOC" and why one is realistic while the other isn't
→ Reveals why data quality, not AI advancement, is the real bottleneck for autonomous security operations
→ Shares specific metrics for measuring AI success in SOCs (including Google's internal research showing 53% improvement in incident response tasks)
→ Discusses which human skills become more critical in an AI-augmented environment and which become less important
→ Addresses why many organizations are still struggling with cloud adoption while chasing AI transformation
Whether you're a security leader evaluating AI vendors or a SOC analyst wondering about your future role, this conversation cuts through the marketing noise to deliver practical insights on what AI can realistically achieve in security operations today.
Dr. Chuvakin: After RSA 2025, one line stuck in my head: "I definitely want AI in SOC. We will see AI in SOC, but we probably won't see AI SOC." There's a crucial distinction here.
When I talk about AI in SOC, I'm thinking about specific use cases: summarization, alert triage support, detection engineering assistance, and other targeted applications. But when some vendors say "AI SOC," they're implying that AI is the entirety of the SOC, that AI runs everything or handles every single SOC task. To me, that's absolutely insane in 2025, and it will likely remain insane for years to come.
The favorite counter-argument I hear is, "Anton, you're not considering how fast AI progresses." But here's the thing: there's no rapid progress in the quality of enterprise data. In fact, I sometimes see regression. I've met old-timers who worked with SIEM 20 years ago and lamented how Vendor A in 2004 had better data quality than Vendor S in 2024.
I definitely want AI in SOC. We will see AI in SOC, but we probably won't see AI SOC.
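To ground that distinction, here is a minimal sketch (ours, not from the interview) of what "AI in SOC" looks like as a bounded assist: the model drafts a triage summary and a human decides. Every function and field name below is hypothetical, not any particular vendor's API.

```python
# A sketch of "AI in SOC" as a bounded assist: the model drafts a triage
# summary, a human makes the decision. Every name here is hypothetical.

def build_triage_prompt(alert: dict) -> str:
    """Turn a raw alert into a narrow summarization prompt."""
    return (
        "Summarize this security alert in three sentences, flag missing "
        "context, and suggest (but do not execute) next triage steps.\n"
        f"rule: {alert['rule']}\nhost: {alert['host']}\n"
        f"details: {alert['details']}"
    )

def call_model(prompt: str) -> str:
    """Stand-in for whatever LLM endpoint you actually use."""
    return "<model-generated draft summary>"

alert = {
    "rule": "suspicious_powershell",
    "host": "ws-042",
    "details": "encoded command spawned by winword.exe",
}
draft = call_model(build_triage_prompt(alert))
print(draft)  # An analyst reviews the draft; the model never closes the case.
```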
Dr. Chuvakin: Exactly. If AI improves by a factor of 10 million, but your data still has the same gaps, that rapid progress argument falls apart. Your success in 2027 will be just as limited as today if your data sources don't improve.
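A rough illustration of why data, not model quality, is the gating factor: a simple data-quality gate over incoming events. If required fields are missing from a large share of events, no amount of model improvement makes triage on top of them trustworthy. The field names and sample events below are invented for the example.

```python
# Illustrative only: a crude data-quality gate over log events.
# Field names and sample data are made up.

REQUIRED_FIELDS = {"timestamp", "host", "user", "event_type"}

def missing_field_rate(events: list[dict]) -> float:
    """Fraction of events missing at least one required field."""
    if not events:
        return 0.0
    bad = sum(1 for e in events if not REQUIRED_FIELDS <= e.keys())
    return bad / len(events)

events = [
    {"timestamp": "2025-05-01T10:00Z", "host": "ws-042",
     "user": "jdoe", "event_type": "login"},
    {"timestamp": "2025-05-01T10:02Z", "host": "ws-042",
     "event_type": "process"},  # no user attribution: a data gap
]
rate = missing_field_rate(events)
print(f"{rate:.0%} of events are missing required fields")
```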
Dr. Chuvakin: I'll be a bit boring here and call back to the good old days of SOC automation. The productivity and efficiency argument still holds. If you have a team of 3 doing the work of a team of 10 by objective measure, that's success.
Now, productivity measurement in SOCs is tricky. You can be productive at things you shouldn't be doing. You might be five times better at dispatching toil instead of preventing it. But ultimately, if you're comparing a heavily automated, AI-infused approach to what you run today, it had better be objectively better.
We did some internal research where certain incident response tasks became 53% more effective with Gen AI. That's not 10X, it's 2X, but for people doing the work, it's quite amazing.
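One way to square the 53% figure with "it's 2X": if the figure means time saved on a task (an assumption on our part; the research could be measuring effectiveness differently), the throughput arithmetic works out to roughly double.

```python
# If a task takes 53% less time, per-analyst throughput roughly doubles:
time_saved = 0.53
throughput_multiplier = 1 / (1 - time_saved)
print(f"{throughput_multiplier:.2f}x")  # ~2.13x
```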
Dr. Chuvakin: Quality metrics are definitely important and feasible to measure, but there's a lot of devil in the details. You might decrease false positives by a factor of 10, making your signal cleaner, but that doesn't automatically mean you'll detect the right things.
As my colleague Allie Mellen from Forrester says, there's a set of SOC metrics, but there isn't the SOC metric. You need time-based metrics, coverage metrics, and others. In a broad AI SOC vision, all of them should improve because of AI.
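To make "a set of metrics, not the metric" concrete, here is an illustrative sketch with entirely made-up numbers. Note how precision (a cleaner signal) moves independently of recall and coverage (detecting the right things), which is exactly the caveat above.

```python
# A sketch of a *set* of SOC metrics. All numbers are hypothetical.
from statistics import mean

# Time-based metrics: detection and response timelines, in minutes.
detect_minutes = [12, 45, 8, 90]
respond_minutes = [30, 120, 25, 60]

# Quality metrics: triage outcomes.
true_positives, false_positives, false_negatives = 40, 10, 20

# Coverage metrics: detections mapped against tracked attacker techniques.
techniques_tracked, techniques_covered = 120, 78

metrics = {
    "mean_time_to_detect_min": mean(detect_minutes),
    "mean_time_to_respond_min": mean(respond_minutes),
    "precision": true_positives / (true_positives + false_positives),
    "recall": true_positives / (true_positives + false_negatives),
    "technique_coverage": techniques_covered / techniques_tracked,
}
for name, value in metrics.items():
    print(f"{name}: {value:.2f}")

# Cutting false positives 10x lifts precision but leaves recall untouched:
# you still miss the same attacks you were missing before.
```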
Dr. Chuvakin: The skill that becomes more important is critical thinking, the ability to judge when something is BS. Today's AI, and the AI of the future, will sometimes produce things that just aren't true. I just came back from vacation where I had great success with Gemini for vacation planning, but massive challenges getting it to understand Italian trains: it suggested trains based on generic schedules, not accounting for strikes.
AI will be wrong today, tomorrow, and for the foreseeable future. Humans need to judge when it's wrong, and that requires critical thinking.
Dr. Chuvakin: This is tricky because my first instinct was to say "looking things up," since agents and automation will gather and present information to you. But then I realized: how do you gather facts to judge whether AI is correct? You look them up.
So I'll thread the needle: routinely looking things up will become less important, but don't lose that muscle altogether. You'll need it for things that don't pass your intuition test or trigger anomalous reactions.
Dr. Chuvakin: Don't collect what you don't need. Even in this era of relatively cheap storage, if you collect it, someone somewhere is paying for hard drives and probably charging you for it.
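A back-of-envelope version of "someone somewhere is paying for hard drives," with placeholder numbers; substitute your own volumes and your provider's actual rates.

```python
# Back-of-envelope cost of collecting what you don't need.
# Every number below is a placeholder, not a quoted price.
daily_gb = 500            # telemetry ingested per day
retention_days = 365
cost_per_gb_month = 0.02  # assumed storage price, USD per GB per month

stored_gb = daily_gb * retention_days
monthly_cost = stored_gb * cost_per_gb_month
print(f"~{stored_gb:,} GB retained, ~${monthly_cost:,.0f}/month just to keep it")
```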
Dr. Chuvakin: The transformation to more engineering-led SOCs. We all read the Netflix SOC-less blog from 2018, seven years ago now, and people think it's a great idea but still try to squeeze a little more out of analysts watching screens and clicking things. People react positively to engineering-led detection and response but don't act on it.
Dr. Chuvakin: In 2025, as we discuss AI transformation, there are still enough businesses that haven't embraced cloud and enough SOCs that don't get cloud. I'm constantly shocked by this. I have a 2021 presentation about SOCs encountering cloud for the first time, and people still find it relevant.
We're in the middle of an AI upheaval, but some people are still struggling with the previous upheaval. For some SOCs, cloud is still new technology. If you encounter a SOC struggling with "new technology," you might assume it's AI, but it could still be cloud.
Dr. Chuvakin's perspective offers a valuable counterbalance to the AI hype cycle. While AI will undoubtedly transform security operations, the path forward requires realistic expectations, quality data, and humans who can think critically about AI outputs. The future is about augmenting analysts’ capabilities while maintaining the essential human skills of judgment and critical thinking.