If you don’t measure it, you don’t manage it. But—I hate to be the bearer of bad news—most companies are measuring the wrong DLP metrics altogether.
I’m Sanjay Seth, and after 20 years in cybersecurity (from my early days as a network admin in 1993, slamming together voice and data muxing over PSTN, to battling Slammer as it spread across the world, to now running my own security company), I’ve seen firsthand how hard it is to measure the true effectiveness of a Data Loss Prevention (DLP) program. I’m also just back from DefCon, still reeling a bit from a week of hardware hacking, but let’s keep it real and talk about something that matters every day to security managers and analysts: effective DLP measurement.
Well, here’s the thing: DLP reports frequently resemble a car dashboard where only the RPM gauge works while the speedometer, hour meter, and temperature gauge sit dead. Granted, it’s nice to have “number of blocked incidents” or “total alerts generated”, but what do those numbers actually tell you? Without a frame of reference, it’s like driving your car blindfolded.
Metrics that are traditionally emphasized include:

- Number of blocked incidents
- Total alerts generated
But metrics that mean something are about understanding impact:

- False positive rate
- Mean time to detect (MTTD) and mean time to respond (MTTR)
- User training completion rate
- Cost of breaches prevented
When I was working with network muxes in the early 2000s, if the only thing I watched was throughput and I never looked at error rates, I was flying blind into real problems. Same with DLP. Quality beats quantity every time.
ROI on DLP? Not everyone’s cup of tea, I know. Security budgets are always under scrutiny, and it’s hard to prove the value of a product whose whole job is to make sure something never happens.
Here’s a framework I’ve gone by, based on hard numbers and a bit of intuition (yes, I trust my gut too, but data is my guide):
ROI = (Cost of Breaches Prevented + Operational Savings – Cost of DLP) / Cost of DLP × 100%
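To make that concrete, here’s a minimal Python sketch of the formula. The function name and the dollar figures are illustrative assumptions, not numbers from a real engagement:

```python
# Minimal sketch of the ROI formula above; the example figures are
# hypothetical, purely for illustration.
def dlp_roi(breaches_prevented: float, operational_savings: float,
            dlp_cost: float) -> float:
    """Return DLP ROI as a percentage."""
    return (breaches_prevented + operational_savings - dlp_cost) / dlp_cost * 100

# Example: $2M projected breach avoidance, an assumed $150K in analyst
# time saved, and an assumed $400K total program cost.
print(f"ROI: {dlp_roi(2_000_000, 150_000, 400_000):.0f}%")  # -> ROI: 438%
```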
Funny story: one time I assisted three banks with modernizing their zero-trust architecture, and the CFO said the question that mattered most to him was straightforward: “Show me the bottom-line savings.” Fortunately, a projected $2M in breach avoidance over the next 12 months silenced the naysayers.
Not all data loss events are “fires” of the same size, and studying the incident response itself is just as revealing as counting alerts.
Time is everything. I’ve seen countless orgs that obsess over detection alerts but ignore the torturous days that pass before their team actually responds. Look for bottlenecks in these numbers: maybe your SOC team is swamped, or your incident playbooks are past their prime.
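If you want to put numbers on those delays, here’s a minimal sketch of the MTTD/MTTR math in Python. The field names (occurred, detected, resolved) are my assumptions; map them to whatever your DLP tool actually exports:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; the schema is an assumption, not any
# specific DLP product's export format.
incidents = [
    {"occurred": datetime(2024, 8, 1, 9, 0),
     "detected": datetime(2024, 8, 1, 11, 30),
     "resolved": datetime(2024, 8, 1, 14, 0)},
    {"occurred": datetime(2024, 8, 2, 8, 0),
     "detected": datetime(2024, 8, 2, 8, 45),
     "resolved": datetime(2024, 8, 2, 13, 0)},
]

def hours(delta) -> float:
    return delta.total_seconds() / 3600

# MTTD: occurrence to detection. MTTR: detection to resolution.
mttd = mean(hours(i["detected"] - i["occurred"]) for i in incidents)
mttr = mean(hours(i["resolved"] - i["detected"]) for i in incidents)
print(f"MTTD: {mttd:.1f} h, MTTR: {mttr:.1f} h")
```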
Pro tip: build a KPI dashboard with these numbers, updated every month. Here are some suggestions for thinking about your dashboard:
| KPI | Target | This Month | Trend |
|---|---|---|---|
| DLP Incidents | < 50 | 42 | ↓ |
| False Positive Rate | < 10% | 14% | ↑ |
| MTTD (hours) | < 1 | 2.5 | ↑ |
| MTTR (hours) | < 4 | 3.2 | ↓ |
| User Training Rate | > 90% | 85% | → |
Dashboards like these provide managers with a real-time view — which is light years beyond mere static monthly reports.
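If you’d rather generate that status check than eyeball it, here’s a rough sketch of the logic behind such a dashboard. The values mirror the table above; the thresholds and structure are my own:

```python
# Rough sketch of a monthly KPI check feeding a dashboard like the one
# above. Values mirror the table; the structure is an assumption.
kpis = [
    # (name, this month's value, target, check: does value meet target?)
    ("DLP Incidents",           42,  50, lambda v, t: v < t),
    ("False Positive Rate (%)", 14,  10, lambda v, t: v < t),
    ("MTTD (hours)",           2.5,   1, lambda v, t: v < t),
    ("MTTR (hours)",           3.2,   4, lambda v, t: v < t),
    ("User Training Rate (%)",  85,  90, lambda v, t: v > t),
]

for name, value, target, meets in kpis:
    status = "on target" if meets(value, target) else "MISS"
    print(f"{name:<24} {value:>6}  target {target:>4}  [{status}]")
```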
Cybersecurity isn’t set-and-forget—especially with DLP. After all, attacker techniques change — so should your defenses and the way you gauge them.
Continuous improvement means:

- Reviewing false positives and tuning policies so that rate trends down, not up
- Refreshing incident playbooks before they’re past their prime
- Revisiting KPI targets and raising the bar as the program matures
Back when I was a network admin working through Slammer, most responses were reactive (i.e., panic). We have to do better than that now: continuous improvement has to be built into our DNA.
Here’s a rough draft of the template I use for monthly DLP reports focused on ongoing improvement:
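At a minimum it covers (section names are my suggestions; adapt them to your org):

1. **Executive summary**: one paragraph on whether we’re trending better or worse, and why.
2. **KPI snapshot**: the dashboard table above (incidents, false positive rate, MTTD, MTTR, training rate) with trend arrows.
3. **Notable incidents**: what happened, how long detection and response took, what it cost.
4. **Bottlenecks and tuning**: policies adjusted, false positives retired, playbooks refreshed.
5. **Next month’s focus**: the one or two KPIs that most need attention.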
This format ensures everyone — tech teams, the C-suite — stays in sync and up to date.
I bet you’re measuring some metrics already. But how mature are your program’s metrics?
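For context, here’s the rough ladder I have in mind when I say “level” (my labels, not a formal maturity model):

- **Level 1**: raw counts: blocked incidents, total alerts generated.
- **Level 2**: rates and ratios: false positive rate, user training coverage.
- **Level 3**: time-based measures: MTTD and MTTR, tracked month over month.
- **Level 4**: business impact: ROI and projected breach avoidance.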
Most orgs are level 1 or 2, which really isn’t a surprise – so many companies are still implementing DLP as a checkbox tool.
Here’s my hot take: your shiny tool matters less than you think it does, AI-powered or not (I know, I know, I’m skeptical too). If you aren’t set up with solid metrics and a team ready to act on that data, you’re just spinning your wheels.
Metrics are the lifeblood of your DLP engine. Without them, you are idling in neutral.
Picture your DLP like tuning a classic engine. You’re not just staring at the oil level; you’re examining spark plugs, compression, emissions — multiple inputs to keep performance high and prevent a breakdown.
I hope these ideas are useful as you try to build a DLP metrics program that isn’t just going through the motions but actually makes a difference. Help yourself to the templates and dashboard wireframes to jump-start your next round of reporting.
#SecurityMetrics #DLP #KPIs #DataAnalytics #ContinuousImprovement