Making AI Useful in the SOC: Data, Metrics & Human Skills with Dr. Anton Chuvakin @ Google Cloud

Qevlar AI team

The promise of AI-powered Security Operations Centers (SOCs) dominates industry conversations today, but how much of this excitement is justified? While vendors promise everything from "AI SOCs" to fully autonomous security operations, the reality is far more nuanced.

Ahmed Achchak, the co-founder and CEO of Qevlar AI, explores this critical gap between hype and reality with Dr. Anton Chuvakin, Security Solution Strategist at Google Cloud and former Research VP at Gartner, where he coined the term "EDR" (Endpoint Detection and Response).

In this interview, Dr. Chuvakin:
→ Explains the crucial difference between "AI in SOC" and "AI SOC", and why one is realistic while the other isn't
→ Reveals why data quality, not AI advancement, is the real bottleneck for autonomous security operations
→ Shares specific metrics for measuring AI success in SOCs (including Google's internal research showing 53% improvement in incident response tasks)
→ Discusses which human skills become more critical in an AI-augmented environment and which become less important
→ Addresses why many organizations are still struggling with cloud adoption while chasing AI transformation

Whether you're a security leader evaluating AI vendors or a SOC analyst wondering about your future role, this conversation cuts through the marketing noise to deliver practical insights on what AI can realistically achieve in security operations today.

The AI SOC Reality Check

Ahmed: The term "AI SOC" is everywhere today, but it feels like déjà vu. What real problems are we solving with AI, and where do you see over-promising?

Dr. Chuvakin: After RSA 2025, one line stuck in my head: "I definitely want AI in SOC. We will see AI in SOC, but we probably won't see AI SOC." There's a crucial distinction here.

When I talk about AI in SOC, I'm thinking about specific use cases: summarization, alert triage support, detection engineering assistance, and other targeted applications. But when some vendors say "AI SOC," they're implying that AI is the entirety of the SOC, that AI runs everything or handles every single SOC task. To me, that's absolutely insane in 2025, and it will likely remain insane for years to come.

The favorite counter-argument I hear is, "Anton, you're not considering how fast AI progresses." But here's the thing: there's no rapid progress in the quality of enterprise data. In fact, I sometimes see regression. I've met old-timers who worked with SIEM 20 years ago and lamented how Vendor A in 2004 had better data quality than Vendor S in 2024.

I definitely want AI in SOC. We will see AI in SOC, but we probably won't see AI SOC.

Ahmed: That's a critical point about data quality being the Achilles' heel of AI models. In real production environments, you often don't have complete information in your CMDB or other data sources.

Dr. Chuvakin: Exactly. If AI improves by a factor of 10 million, but your data still has the same gaps, that rapid progress argument falls apart. Your success in 2027 will be just as limited as today if your data sources don't improve.

Measuring AI Success: Beyond the Hype

Ahmed: If we accept that AI will never be perfect, how should we evaluate success? What KPIs do you use to determine if an AI system is helping or harming an organization?

Dr. Chuvakin: I'll be a bit boring here and call back to the good old days of SOC automation. The productivity and efficiency argument still holds. If you have a team of 3 doing the work of a team of 10 by objective measure, that's success.

Now, productivity measurement in SOCs is tricky. You can be productive at things you shouldn't be doing. You might be five times better at dispatching toil instead of preventing it. But ultimately, if you're comparing a heavily automated, AI-infused approach, it better be objectively better than what you have today.

We did some internal research where certain incident response tasks became 53% more effective with Gen AI. That's not 10X, it's 2X, but for people doing the work, it's quite amazing.

Ahmed: That aligns with our measurements. We typically see 60-70% improvements in similar cases. What about quality metrics like accuracy?

Dr. Chuvakin: Quality metrics are definitely important and feasible to measure, but there's a lot of devil in the details. You might decrease false positives by a factor of 10, making your signal cleaner, but that doesn't automatically mean you'll detect the right things.

As my colleague Allie Mellen from Forrester says, there's a set of SOC metrics, but there isn't the SOC metric. You need time-based metrics, coverage metrics, and others. In a broad AI SOC vision, all of them should improve because of AI.

The Human Element: Skills That Matter More (and Less)

Ahmed: In an AI-augmented SOC, what human skill becomes more important, and what becomes less critical?

Dr. Chuvakin: The skill that becomes more important is critical thinking — the ability to judge when something is BS. Today's AI, and the AI of the future, will sometimes produce things that just aren't true. I just came back from vacation where I had great success with Gemini for vacation planning, but massive challenges getting it to understand Italian trains because it suggested trains based on generic schedules, not accounting for strikes.

AI will be wrong today, tomorrow, and for the foreseeable future. Humans need to judge when it's wrong, and that requires critical thinking.

Ahmed: What about skills that become less important?

Dr. Chuvakin: This is tricky because my first instinct was to say "looking things up," since agents and automation will gather and present information to you. But then I realized: how do you gather facts to judge whether AI is correct? You look them up.

So I'll thread the needle: routinely looking things up will become less important, but don't lose that muscle altogether. You'll need it for things that don't pass your intuition test or trigger anomalous reactions.

Quick Takes: The Fire Round

Ahmed: Time for our fire round. What's one piece of advice you're tired of giving but still repeat often?

Dr. Chuvakin: Don't collect what you don't need. Even in this era of relatively cheap storage, if you collect it, someone somewhere is paying for hard drives and probably charging you for it.

Ahmed: What's one security decision organizations delay too long and regret?

Dr. Chuvakin: The transformation to more engineering-led SOCs. We all read the Netflix SOC-less blog from 2018—seven years ago—and people think it's a great idea but still try to squeeze a little more out of analysts watching screens and clicking things. People react positively to engineering-led detection and response but don't act on it.

Ahmed: What's one lesson from working with SOCs that surprised you?

Dr. Chuvakin: In 2025, as we discuss AI transformation, there are still enough businesses that haven't embraced cloud and enough SOCs that don't get cloud. I'm constantly shocked by this. I have a 2021 presentation about SOCs encountering cloud for the first time, and people still find it relevant.

We're in the middle of an AI upheaval, but some people are still struggling with the previous upheaval. For some SOCs, cloud is still new technology. If you encounter a SOC struggling with "new technology," you might assume it's AI, but it could still be cloud.

The Bottom Line

Dr. Chuvakin's perspective offers a valuable counterbalance to the AI hype cycle. While AI will undoubtedly transform security operations, the path forward requires realistic expectations, quality data, and humans who can think critically about AI outputs. The future is about augmenting analysts’ capabilities while maintaining the essential human skills of judgment and critical thinking.


🎧 Listen on Spotify


🍏 Listen on Apple Music
