
As managed detection and response providers scale to handle hundreds of enterprise clients, a fundamental tension emerges: how do you maintain detection quality across vastly different environments while keeping operations efficient?
In this conversation with Ahmed Achchak, CEO and co-founder of Qevlar AI, Beatrice Françon, who leads global MDR service strategy at Atos, discusses how Atos navigates this challenge while processing more than a billion security events daily. She shares insights on where AI genuinely delivers value, why context matters more than uniformity, and how to keep human expertise central even as automation advances.
Read on for the highlights of their conversation.
Ahmed Achchak: Atos has been integrating AI across detection, triage, and investigation. From your experience, where does AI actually deliver value for MDR?
Beatrice Françon: I see it mostly in two areas. The first is detection: complex pattern recognition and predictive modeling across huge amounts of data. The second is triage and investigation, where SOC analysts receive numerous alerts that need review. This can be extremely time-consuming.
You can do this with SOAR tools, but those playbooks can be very hard to maintain, and they're only as good as what you design them for.
Ahmed Achchak: Where do you think human expertise is still needed and remains the deciding factor?
Beatrice Françon: Humans remain critical in several areas. The first is response and containment. You can automate endpoint isolation, but complex responses still need human oversight. When there's a risk of lateral movement, you want to actively look for it. You need to understand what happened and why an endpoint had to be isolated.
Another area is threat hunting. This is the proactive side, where you need hypothesis-based hunting with very strong human skills. Our threat research center has experienced staff familiar with attackers and attack paths. They build hypotheses that we can factor into automations.
My favorite area is context. I don't believe in uniform detection and uniform investigation. The exact same alert can be malicious for one client and benign for another. It's all about context.
Ahmed Achchak: This is massive for MDR providers. Not everything lives in the CMDB, contrary to what many people believe. Finding ways for AI to learn or fetch context, or ask humans for it, is critical for improving investigation performance when you have multiple customers.
Beatrice Françon: Exactly. Something malicious for one client can be completely normal for another. It's not easy to build an all-encompassing framework unless you have access to the client's specific context.
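To make the idea concrete, here is a minimal sketch of context-aware triage; the tool names and rules are invented for illustration and do not represent Atos's or Qevlar's actual logic:

```python
# Hypothetical sketch, not Atos's or Qevlar's actual logic: the same alert
# yields opposite verdicts once per-client context enters the picture.
from dataclasses import dataclass, field

@dataclass
class ClientContext:
    # What this client has told us is expected in their environment.
    approved_admin_tools: set = field(default_factory=set)
    scanner_ips: set = field(default_factory=set)

def triage(alert: dict, ctx: ClientContext) -> str:
    """Return a verdict for an alert, factoring in client context."""
    # PsExec is a classic lateral-movement tool, but some IT teams
    # use it legitimately for remote administration.
    if alert["tool"] in ctx.approved_admin_tools:
        return "benign: approved admin tooling for this client"
    if alert["source_ip"] in ctx.scanner_ips:
        return "benign: internal vulnerability scanner"
    return "suspicious: escalate to an analyst"

alert = {"tool": "psexec.exe", "source_ip": "10.0.8.15"}
client_a = ClientContext(approved_admin_tools={"psexec.exe"})
client_b = ClientContext()

print(triage(alert, client_a))  # benign for client A
print(triage(alert, client_b))  # suspicious for client B
```

The same PsExec alert is benign for the client that declared it approved admin tooling and suspicious for the one that did not: the verdict lives in the context, not in the alert.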
Ahmed Achchak: You process over a billion security events per day at Atos. How do you design investigation workflows that balance the need for standard, uniform data with the flexibility required to reflect each client's unique risk and operational reality?
Beatrice Françon: It starts with a baseline: a set of detections addressing the most common threats. You can't reinvent everything each time. You need that baseline.
Then we go in steps. We deploy this baseline very fast into real client conditions, and then we tune. We turn it on and initially receive too many false positives. But I don't consider false positives bad at the beginning of a SOC deployment. It's actually the opposite. You get extremely useful learnings about your client context. This is where we go into the tuning phase.
In parallel, we learn about client risk through direct interaction with stakeholders to understand specific risks and assets. The baseline is key, but it needs to be complemented by tuning and by understanding the client's specific risks. In the context of AI, this means feeding the AI with context about what's specific to your client.
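A hedged sketch of that baseline-plus-tuning approach, with invented rule names and thresholds:

```python
# Illustrative only; rule names and thresholds are invented. A shared
# detection baseline is overlaid with per-client tuning learned from the
# false positives seen during early deployment.

BASELINE_RULES = {
    "brute_force_login": {"threshold": 10, "window_minutes": 5},
    "psexec_lateral_movement": {"enabled": True},
}

# Overrides accumulated during the tuning phase, per client.
CLIENT_TUNING = {
    "client_a": {
        # Their jump host legitimately generates bursts of logins.
        "brute_force_login": {"threshold": 50},
    },
}

def effective_rules(client: str) -> dict:
    """Merge the shared baseline with client-specific overrides."""
    merged = {name: dict(cfg) for name, cfg in BASELINE_RULES.items()}
    for name, overrides in CLIENT_TUNING.get(client, {}).items():
        merged.setdefault(name, {}).update(overrides)
    return merged

print(effective_rules("client_a")["brute_force_login"])
# {'threshold': 50, 'window_minutes': 5}
```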
Ahmed Achchak: For highly regulated industries like critical infrastructure, clients expect not just accurate investigations but also auditability and explainability. How does that shape how you introduce AI into the investigation process?
Beatrice Françon: It's absolutely critical because we have to remain fully accountable. As an MSSP, we're accountable for the outcome. This means the AI cannot be a black box.
We expect any AI solution we introduce in our SOC, whether it's our own or a partner solution, to provide a step-by-step audit trail of everything that's happening. Every hash, every URL, everything must be audited. Then we can hand it over to a SOC analyst who can check step by step.
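For illustration, a step-by-step audit trail of this kind might look like the following sketch (the record format is invented, not Atos's):

```python
# Invented record format, for illustration: every action the AI takes is
# appended to an audit trail that an analyst can replay step by step.
import json
from datetime import datetime, timezone

audit_trail = []

def record_step(action: str, observable: str, result: str) -> None:
    """Log one investigation step: what was done, to what, with what result."""
    audit_trail.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "observable": observable,  # every hash, URL, or IP that was touched
        "result": result,
    })

record_step("hash_reputation_lookup", "e3b0c44298fc1c14...", "unknown to threat intel")
record_step("url_detonation", "hxxp://example[.]test/payload", "sandbox: no execution")

print(json.dumps(audit_trail, indent=2))  # hand this to the SOC analyst
```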
This brings us to the next concept: human in the loop. We're not talking about a fully autonomous black box, but really a human in the loop where I expect our SOC analyst to review the output and say, "Yes, I agree," or "I disagree, but here's the context." So next time, the solution will act as expected.
Two keywords: no black box and human in the loop. Beyond that, commitment to responsible AI, the EU AI Act, and attention to data privacy, residency, and security are extremely important.
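A minimal sketch of that human-in-the-loop feedback, again with an invented structure:

```python
# Invented structure, for illustration: the analyst's agreement or
# disagreement (with context) is persisted, so the next occurrence of the
# same class of alert is handled as the analyst expects.

feedback_store = {}

def review(alert_signature: str, ai_verdict: str,
           analyst_verdict: str, context: str = "") -> None:
    """Record the analyst's review of the AI's output for this alert class."""
    feedback_store[alert_signature] = {
        "ai_verdict": ai_verdict,
        "analyst_verdict": analyst_verdict,
        "context": context,
    }

def adjusted_verdict(alert_signature: str, ai_verdict: str) -> str:
    """Prefer a recorded analyst correction over the raw AI verdict."""
    prior = feedback_store.get(alert_signature)
    if prior and prior["analyst_verdict"] != prior["ai_verdict"]:
        return prior["analyst_verdict"]
    return ai_verdict

review("psexec_from_jumphost", ai_verdict="malicious",
       analyst_verdict="benign", context="approved admin workflow")
print(adjusted_verdict("psexec_from_jumphost", "malicious"))  # benign
```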
Ahmed Achchak: Since you have multiple customers across different industries, one of the hardest operational challenges is making sure insights from one investigation strengthen the next. How do you approach this knowledge transfer?
Beatrice Françon: You can look at this from two complementary angles: knowledge and training.
Knowledge means establishing sharing across our SOC teams, for example by providing them with a threat intelligence platform to exchange findings about specific threats. But equally important is training.
We're in an industry where keeping and growing talent is extremely difficult. You need to grow junior analysts. But people ask, "If you introduce AI SOC analysts, how will you grow junior analysts?" It looks like a paradox, but it's actually the opposite.
Your AI investigation reports can be used to train your SOC analysts, bringing juniors to the next level more quickly. It's also far friendlier than reading traditional SOPs, and people are more eager to learn that way.
Also, your senior SOC staff are less buried in day-to-day work. They spend time mentoring juniors, but they can also focus more on where we want them to add value: understanding client risk, tuning detection, and threat hunting.
Ahmed Achchak: As automation becomes more sophisticated and AI takes more of the lead, with the freedom to explore, the line between static automation and AI-led investigation is blurring. How do you approach the integration between existing SOAR capabilities and AI SOC platforms? How do you make sure both add value rather than duplicate effort?
Beatrice Françon: I see AI's value becoming obvious in alert enrichment and investigation. Alert enrichment has been a typical area for SOAR, but you can easily enrich alerts with AI and investigate with AI.
SOAR remains useful today in response, from basic playbooks like isolating a node to more complex playbooks. Over time, this will change, but right now I see AI focused on alert enrichment and investigation, with SOAR focused on response. That's how I see the split.
Ahmed Achchak: At Qevlar, we have a conviction that the investigation part is where the dynamic aspect shines most. An investigation is never exactly the same. Even with the same alert, different customers and different contexts lead to different conclusions. You want something dynamic that can do the right pivots based on signals it picks up. That's difficult to replicate with static playbooks.
If you reach the right conclusions at the end of the investigation, the remediation is often deterministic. Once you know an account has been breached, it's evident you need to reset the password, for example.
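A small sketch of that handoff, with hypothetical playbook names:

```python
# Hypothetical playbook names: once the dynamic investigation reaches a
# conclusion, the response it triggers is largely a deterministic lookup.

REMEDIATION_PLAYBOOKS = {
    "account_compromised": ["reset_password", "revoke_sessions", "notify_user"],
    "host_infected": ["isolate_endpoint", "collect_triage_image"],
    "benign": [],
}

def remediate(conclusion: str) -> list:
    """Map an investigation conclusion to its deterministic response steps."""
    return REMEDIATION_PLAYBOOKS.get(conclusion, ["escalate_to_analyst"])

print(remediate("account_compromised"))
# ['reset_password', 'revoke_sessions', 'notify_user']
```

The hard, dynamic work happens upstream in the investigation; the response itself is a lookup.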
Beatrice Françon: That's right. And it will evolve over time. There's a side note, though. You can still use AI on the SOAR piece for different purposes. For example, you can speed up the maintenance or creation of SOAR playbooks with AI, but that's another use case.
When asked what SOC practice MSSPs rely on too much and should rethink, Beatrice's answer was immediate: "Static detection. Static playbooks."
This theme ran throughout the conversation: the recognition that in a world of diverse clients and evolving threats, rigid approaches fail. Whether it's detection rules that don't account for client context, playbooks that can't adapt to unexpected scenarios, or migration projects that simply replicate past systems without reassessing risk, the static mindset is what holds MDR providers back.
For organizations evaluating MDR partnerships, Beatrice's advice was equally pointed: "Don't try and reproduce exactly your past detection. Take a step back and look at your risk." It's advice she admits isn't always followed, but it reflects a deeper truth: effective MDR requires letting go of the comfort of familiar patterns in favor of approaches that actually match current reality.
As MDR providers continue to scale, the winners will be those who've figured out how to combine efficient baselines with genuine contextual understanding, how to make AI transparent enough for regulated industries, and how to keep human expertise central even as machines handle more of the mechanics.
The message from Atos is clear: in a world of increasingly sophisticated automation, context is what separates effective detection from expensive noise.
🎧 Listen to the full episode on Spotify
🍏 Listen to the full episode on Apple Podcasts