While security teams remain overwhelmed with alerts, the real damage comes from threats they never see. In this conversation, Shane Shook, venture partner at Forgepoint Capital and trusted advisor to innovative security startups, discusses why the industry's obsession with false positives is creating dangerous blind spots—and how AI can finally help fix the false negative problem.
With over 30 years of experience in cybercrime investigations, forensics, and security consulting, Shane brings a unique perspective on where detection strategies are failing and what organizations can do about it.
In this interview, Ahmed Achchak, CEO and co-founder of Qevlar AI, and Shane explore:
→ Why false positives get all the attention while false negatives slip through the cracks
→ Where in the kill chain organizations are missing the most critical detection opportunities
→ How AI can provide context at machine speed to reduce both false positives and false negatives
→ Practical steps CISOs can take to shift detection left in 2025
For security leaders building detection strategies and evaluating AI-powered solutions, this conversation provides essential insights into addressing the blind spots that matter most.
Ahmed Achchak: Most security teams today are overwhelmed with alerts, but the real damage comes from threats we don't see. Why are false positives still so prevalent, and why do we talk so little about false negatives?
Shane Shook: I'm very happy you're focusing on false negatives. Too much of our industry focus has been on false positives at the expense of false negatives.
The reason false positives are still so prevalent is fundamentally that SIEM, SOAR, XDR—they rely on other people's intelligence. They rely on third parties or intelligence of past activities that may or may not have relevance to the victim today. If you're a real estate company and you're picking up alerts related to banking cyber threats, it might simply be that some OVH IPs are correlated to detections because of some time server lookups or coincidence of re-registration of domains.
It's the correlation more than the causation that creates so many false positives today. And false positives are flashy because, through poor research discipline, you can associate them with what other people have said is a problem.
What that leads to is spending limited resources, particularly in the SOC, which is a cost center in most businesses. Spending those limited resources on tuning out false positives results in down-tuning signals that might otherwise catch genuinely applicable threats, and that creates false negatives. The fear of false positives has created false negatives.
Ahmed Achchak: That's exactly what we see with many of our customers: everyone focuses on false positives, and sometimes there's a voluntary or involuntary drastic shift in detection rules. We overtune them to reduce false positives, which leads to missing stuff that can matter.
Shane Shook: Over 20 years of assessing and advising on SIEM and detection programs generally for companies around the world, I've noticed they're fundamentally two-dimensional: time and place, meaning a timestamp and an identifier such as an IP address, DNS name, or host name.
But there's an important third dimension, which is context. The context itself comes in two dimensions. One is the utility of the resource that the alert relates to. So is it a finance computer or is it an HR credential? What's the resource context?
The second is what's the context of the activity, particularly if we look at the kill chain. Is it attack phase or is it breach phase? If it's attack phase and it's incoming, are there corresponding breach indicators that may not be the same as the attack phase indicators, but they correlate in a way that raises alarms? It's that context that matters between those dimensions.
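The two context sub-dimensions Shane describes, resource utility and activity phase, can be illustrated with a minimal triage-scoring sketch. The weights, field names, and categories below are purely illustrative assumptions, not any product's actual scoring model:

```python
# Illustrative sketch: adding a third "context" dimension to alert triage.
# Weights and category names are assumptions for demonstration only.

# Resource context: how sensitive is the asset the alert relates to?
RESOURCE_WEIGHT = {"finance_host": 3, "hr_credential": 3, "kiosk": 1}

# Activity context: breach-phase activity outranks attack-phase noise.
PHASE_WEIGHT = {"attack": 1, "breach": 4}

def context_score(resource: str, phase: str) -> int:
    """Combine the two context sub-dimensions into one priority score."""
    return RESOURCE_WEIGHT.get(resource, 1) * PHASE_WEIGHT.get(phase, 1)

def prioritize(alerts):
    """Sort alerts so breach-phase activity on sensitive assets surfaces first."""
    return sorted(alerts,
                  key=lambda a: context_score(a["resource"], a["phase"]),
                  reverse=True)

alerts = [
    {"id": 1, "resource": "kiosk", "phase": "attack"},
    {"id": 2, "resource": "finance_host", "phase": "breach"},
    {"id": 3, "resource": "hr_credential", "phase": "attack"},
]
print([a["id"] for a in prioritize(alerts)])  # breach on finance host rises to the top
```

The point is not the specific weights but the shape of the decision: two identical signals in time and place can deserve very different priority once resource and phase context are attached.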
Ahmed Achchak: According to your experience, where in the early kill chain are organizations missing the most detection opportunities today?
Shane Shook: I think it's fundamentally in the delivery phase. We've got good email detection with Mimecast and Abnormal Security and those types of solutions. We've got good email protections with DMARC and other things like that. So we're generally managing the traditional phishing route of malware delivery.
We've got relatively good detection on inbound communications with IDS rules and firewall rules. But as we've noticed over the past couple of years, the third-party dependencies we have on our partners, our trusted vendors, and our employees are now the preferred mechanism for delivering threats. It's much harder to defend against that.
If you get an email from one of your investors that says, "Open this PDF and give me your thoughts," you're more likely to do that because that's a trusted relationship, independent of whether your email client says don't open that attachment. The delivery phase is where we see the most false negatives occurring, and it's because of this issue of trust. That's why there's been a reaction by industry toward more "zero trust" that unfortunately gets defeated by the human factor.
Ahmed Achchak: From your perspective, how can AI be applied to reduce false negatives without creating a massive amount of false positives? It's always about finding the sweet spot between the two.
Shane Shook: What AI can do is discern the labels for features of use at machine speed, as opposed to analyst speed. An L1 or L2 analyst in a SOC takes four hours on average to evaluate a ticket, and that's after the ticket has arrived through all of its processing. So we're typically talking one to two days before anything happens with a given anomaly, and that's in a general business, not a Bank of America.
With agentic AI, imagine an endpoint equipped with LLM intelligence to perform routine lookups on use factors and functions of that user's interaction with systems and resources. That baseline is more discrete, it's more immediate, and because it's discerned from data science, it's more likely to return a higher fidelity alert than a lower fidelity context for review.
One thing I really appreciated about Qevlar's approach is that very often L1 and L2 SOC analysts don't have access to the more critical resources in the victim estate to evaluate these additional contextual dimensions of an alert. They're limited to what's in the ticket and what may be accessible through a utility like Sentinel or something. That's not a limitation of AI if it's implemented more intelligently.
With Qevlar, for example, you get the intelligence of the entire system, not the limited resource intelligence of an L1 analyst. Imagine having me, 30-plus years of experience acting on an L1 alert. That's what you get with a platform like Qevlar that you'll never get out of an L1 SOC analyst.
Ahmed Achchak: What role do you think context can play for accurate early detection, and how can AI help connect the dots faster?
Shane Shook: For 20 years, statistical AI has proven the benefit of anomaly detection, through random forests for example, to identify patterns that aren't visible to an observer at a particular point in time. More generally, AI can produce context in far less time than it takes any level of analyst to produce it. What an analyst can do is a combination of their experience and the access, in time and in privilege, that they get to the resources they need to investigate the context of an alert.
If we involve these other dimensions—not just the two-dimensional time and place of the activity, but also the context of the user by their rights, by their privileges, by baseline activity indicated by their historical use of a resource—then if a phishing email actually is executed, that's where AI through reinforcement learning with some human feedback can improve the baseline for detections.
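The per-user baseline Shane describes can be sketched very simply: learn a user's normal activity from history, then flag deviations. This is a minimal statistical illustration; the feature (daily access counts), the three-sigma threshold, and the data are all assumptions, not a description of any real detection product:

```python
# Minimal sketch: a per-user behavioral baseline from historical resource use,
# with deviations flagged as anomalies. Threshold and feature are illustrative.
from statistics import mean, stdev

def build_baseline(daily_access_counts):
    """Baseline = mean and standard deviation of historical daily accesses."""
    return mean(daily_access_counts), stdev(daily_access_counts)

def is_anomalous(count, baseline, k=3.0):
    """Flag activity more than k standard deviations above the user's norm."""
    mu, sigma = baseline
    return count > mu + k * sigma

history = [4, 5, 6, 5, 4, 6, 5]      # a user's normal daily file accesses
baseline = build_baseline(history)
print(is_anomalous(5, baseline))      # a typical day is not flagged
print(is_anomalous(400, baseline))    # sudden bulk access stands out
```

In practice the baseline would cover many features (rights, privileges, resources touched, times of day), and, as Shane notes, reinforcement learning with human feedback would refine the threshold rather than leaving it fixed.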
Ahmed Achchak: If you're advising a CISO today building a detection strategy for 2025 or 2026, what would you say is the single most important step to shift detection left in a practical way?
Shane Shook: I wrote a blog at Forgepoint called the ABCs of Cyber, because many organizations can't comprehend the MITRE ATT&CK framework; fourteen points of intelligence or process are too difficult to evaluate. So I broke it down into three phases: Attack, Breach, and Compromise.
The best way to shift left is to reduce dwell time, which means focusing on the priorities. Breaches should always be a higher priority than general attacks. In my experience, breaches represent one to three percent of attack signals on an aggregate basis over time. And it's breaches that matter, because they determine the extent of the compromise.
To successfully shift left, an immediate step is to focus more intelligently on what's going out of your network—those are breach indicators—as opposed to what's coming into your network. What I find, unfortunately, is most TIP, SIEM, and SOAR programs are focused on attack indicators and not breach indicators.
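The egress-first idea can be shown with a toy filter over flow records: outbound connections to destinations outside a learned baseline are treated as breach indicators, while inbound noise is set aside. The record fields and the baseline set here are illustrative assumptions:

```python
# Illustrative sketch of "watch what's going out": flag outbound flows to
# destinations never seen in the learned egress baseline. Fields are assumed.

KNOWN_DESTINATIONS = {"203.0.113.10", "198.51.100.7"}  # learned egress baseline

flows = [
    {"direction": "in",  "peer": "192.0.2.99"},    # inbound: attack-phase noise
    {"direction": "out", "peer": "203.0.113.10"},  # known partner: benign
    {"direction": "out", "peer": "198.18.0.44"},   # novel egress: breach indicator
]

def breach_indicators(flows, known):
    """Outbound traffic to a destination outside the baseline is the signal."""
    return [f for f in flows
            if f["direction"] == "out" and f["peer"] not in known]

print(breach_indicators(flows, KNOWN_DESTINATIONS))
```

A real program would of course baseline egress over protocols, volumes, and times as well, but the inversion of attention is the same: the one novel outbound flow matters more than any amount of inbound scanning noise.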
As organizations continue to struggle with alert fatigue and the constant pressure to reduce false positives, Shane's insights reveal a critical blind spot in current detection strategies. The focus on what's coming into networks, rather than what's going out, represents a fundamental misalignment with how modern threats actually operate.
The promise of AI lies not in replacing human analysts but in providing them with the context, speed, and comprehensive visibility they need to make better decisions faster.
🎧 Listen to the episode on Spotify
🍏 Listen to the episode on Apple Podcasts
Check out Shane's book "Cybercrime Investigation Body of Knowledge"
Read Shane's articles on Forgepoint's blog