
Most SOCs believe they're not ready for AI yet. Everyone talks about AI, but for many security operations centers there's a conviction that it's something for later. Meanwhile, others rush in, hoping AI will compensate for the gaps they've accumulated over time.
In this episode of the Super SOC Podcast, Ahmed Achchak, CEO and co-founder of Kevlar AI, sits down with Rafal Kitab, Director of SecOps and Incident Response at ConnectWise. Together, they challenge both mindsets and explore the reality that lies in between.
Ahmed Achchak: Looking back at 2025, what were the big AI promises you kept hearing for SecOps? And which ones actually held up in the real world, whether in your own experience or in what you've seen elsewhere?
Rafal Kitab: I think we all understand that the premise of the AI SOC is a continuum. There's an adoption curve you have to go through: at first it's likely to make you work faster, then at some point it takes care of parts of your work, and eventually it maybe does the entirety of your work for you, while you sit back, relax, and enjoy your day. And in my own testing, I could see that I am more productive with AI.
I could also see that I am not as productive as I thought I would be, and talking to my peers, the consensus seems to be that it works, just not as fast as they thought it would. But then I also talked with them about how they were doing the testing and what they did beforehand, and it turns out we were approaching the AI SOC like a plug-and-play solution, which we know it is not. Because if you do really short-lived testing on a small scope and you don't do any AI readiness, let's call it, on your end, you can't really expect miracles, right?
So that's what I believe is happening these days: we are doing plenty of testing without being ready for it. And if we feel that the AI SOC doesn't land right now, there are two possible reasons. First, maybe the AI SOC isn't there yet. That's hard to say from the outside when we don't know how those tools work under the hood. But I do know how SOCs work under the hood, and I can tell you right now that SOCs very often aren't ready to make full use of what an AI SOC can do for them.
And so we are partly to blame, because we assume that AI is going to cover for our inability to do some things. At the same time, I believe the AI SOC marketing is also to blame to some extent, because of how it often works, and I'm sure you've seen this too: we portray SOCs as this hellscape, right? Thousands of alerts, mean time to whatever measured in months, alert fatigue, analysts burning out, and so on.
And then we say, okay, that's your current state, but AI can help you. That seems to suggest that regardless of how badly you do your job as a SOC leader, AI can come in and fix your mistakes, which I don't believe is the case, and which I think often does more harm than good. Because let's assume for a second that your SOC is poorly optimized. It is likely that way because of a lack of investment in the SOC. So if that's who you are marketing to, how likely are they to buy? I don't think it's very likely.
So essentially, on one hand we are saying that AI can help you even if you are inefficient. And then we turn around and say, all right, but you need to do some AI readiness, which means you have to be efficient. Those messages contradict one another, which leaves us in a place where we don't really understand what AI is supposed to do for us out of the box versus what we need to do to make AI work for us.
Ahmed Achchak: I'm a big follower of your posts, and in one of them I believe you mentioned that adding AI too early can actually accelerate inefficiency, which I found an interesting way to put it. Can you walk us through a scenario where AI made a SOC worse, and explain why it did?
Rafal Kitab: Okay, so imagine this. Right now your SOC is dealing with hundreds of alerts every day. And by the way, whenever I see those SOC workload reports quoting huge numbers, I think the most recent one I read claimed around 22,000 alerts per day on average across some 50 SOCs, that's a crazy number to me, because I don't think it reflects reality. But let's assume for a second that your SOC is like that: even after fine-tuning, after automation, you are dealing with 22,000 alerts every day.
And that, to me, means you can't fine-tune your environment. If you now use AI to close those alerts for you, the only thing you're achieving is shoveling the garbage faster. Your SLAs might be green after you do that, but if we assume your core objective as a SOC is to detect and respond, you are not detecting anything and you are not responding to anything if you just close things at scale using AI.
I believe you have to be very intentional about what you are detecting. If you are not, then you are relying on luck, and we just can't have that. So when I talk about accelerating inefficiency, what I really mean is hiding your inefficiency. Yes, your SLAs may be green if you use AI to close those alerts, but your main objective of detection and response isn't being achieved, before AI or after AI.
Ahmed Achchak: I love what you're saying; that's a deep conviction for us. We believe the core mission of a SOC is not necessarily to triage alerts but to protect the organization, to detect and respond, exactly as you mentioned. In our blog article, we explained that from our perspective SOCs are ill-designed, by which I mean we put a lot of pressure on the team in terms of how many alerts they go through, what their mean time to investigate is, et cetera. And I agree with you: adding AI on top can give you the impression that you're solving the problem because your metrics improve, but because of this bad design, you may not really be cracking the underlying problem.
Rafal Kitab: Yeah, absolutely. And just to add to that, I've seen the argument that if you close the alerts, then you have more time to fix your environment. But closing alerts using AI takes the load off your analysts, and it is not the job of the analyst to manage the workload of a SOC; that's the business of SOC leaders. So taking the load off analysts isn't going to make SOC leaders more competent. Yes, you can make analysts sleep better, but it isn't going to magically fix your issues. It's the leaders who should manage the workload. They should be the driving force behind fine-tuning the environment, understanding how many alerts the team can handle every day, and making sure the analysts are healthy. So I don't buy that argument.
Ahmed Achchak: From your perspective, what are the non-negotiable foundations that a SOC must have in place before they can effectively leverage AI? What's the bare minimum in your view?
Rafal Kitab: I think there are three things you need to have in order to use AI in a meaningful way. The first one is data quality. Whatever you put in is what you're going to get out, and if your data is terrible, your AI is going to produce terrible results. So you need to make sure your data quality is in order.
The second thing is processes. You need to understand what processes you are running right now, how effective they are, and what could be improved. Because if AI is supposed to either make us more efficient or do the work for us, you need to make sure that the work you are automating is correct to begin with. Otherwise, you're just automating mediocrity; a bad process automated is just a faster bad process.
And the third one, and I think it's the most important, is that you need very clear objectives for what you are looking to do with AI. So you need SOC objectives, but also AI objectives. You need to understand: I want AI to help me with, let's say, enrichment; I want AI to help me write stories; I want AI to help me write rules. You need to be very specific about what you want AI to do for you so that you can properly measure its impact on your workflows.
Those three, data quality, processes, and clear objectives, are what you need to have before you start implementing AI. Otherwise, you're just going to be disappointed.
Ahmed Achchak: I actually had a conversation very close to this with one of our customers a few months ago. It was almost a philosophy discussion, which I do like. The question was how AI will affect the ability of SOC analysts to train, because you can see the L1 analyst role as almost a training position: you start on the lowest level of alerts, you learn, and then you move up the ladder. So the question becomes: if AI is indeed able to handle those alerts, will humans still learn? The conclusion we reached actually came from the way their own analysts were using the product. The most junior ones are probably its biggest fans, because all of a sudden they have a tool that actually helps them learn. It's like having an L2 SOC analyst next to you who runs every investigation, with an audit trail where you can see essentially every step it went through. So the conclusion was that maybe AI is actually helping these folks learn faster. What's your perspective on this?
Rafal Kitab: I think that's true. We tend to overestimate how much an analyst learns on the job, especially a junior. If you work for two years as a junior analyst, you don't have two years of experience; you have two months of experience repeated twelve times, because you keep doing the same stuff. You learn something in the first two months, then you're mostly reheating it, maybe picking up a bit here and there. Working on the first level of a SOC isn't conducive to big developments in your knowledge or in how good you are as an analyst.
So if we can use AI to give you more perspective, more structure, and help you work on more difficult alerts, that makes it easier for you to learn. And also, let's be honest, how much effort really goes into teaching L1 analysts? And what better way to learn is there than learning on the job?
I hear the argument that if we are using AI, we are not learning, because doing things manually is how you learn. I mean, come on, have you worked as a junior analyst? What are you doing? Mostly copy-pasting and following SOPs. So let's relax with that take.
Ahmed Achchak: I'll ask you a few questions; please answer with the first thing that comes to mind. Does that work for you?
Rafal Kitab: It works for me, let's go.
Ahmed Achchak: What's one security opinion that you will stand by even if 90% of the industry disagrees with you?
Rafal Kitab: You don't need IT experience to work in security. That's what I believe to be true, and it's something that evokes a lot of emotion in people. You just don't have to take a detour through IT.
Ahmed Achchak: What's a SecOps myth that you wish would vanish?
Rafal Kitab: That the tiered approach to SOCs is needed. Many people say it is, but I think it's antiquated; we don't really have to have it. And maybe as a follow-up: we don't need to be as SLA-focused as we are right now. We collect too many metrics, and at some point we should probably stop.
Ahmed Achchak: What do you think is required to start in cybersecurity?
Rafal Kitab: Ethics. Really strong ethics. Without ethics you're useless.
Ahmed Achchak: If you could erase a security buzzword, you've mentioned that part of the buzz comes from marketing, but if there was a security buzzword that you could essentially remove from the industry for the year 2026, which one would it be?
Rafal Kitab: Zero trust, for sure. Nothing comes close to zero trust these days. Zero trust is really just proper configuration of things, and the term is completely overused. Nobody actually does it. I have zero trust in a company's ability to introduce zero trust properly. It needs to go; this year is the year we can put the term to rest.
🎧 Listen to the full episode on Spotify
🍏 Listen to the full episode on Apple Podcasts