AI is having a real moment in security—but not in the way most slide decks suggest. A small group of companies is quietly turning AI from an experiment into a measurable advantage in how they detect threats, resolve incidents, and protect the business, while everyone else is still trying to prove a business case. This article pulls out the “tell me something I don’t know” bits from Google Cloud’s latest ROI of AI in security work and cross-checks them against broader research from McKinsey and BCG to separate signal from noise.
Wait, is AI in security actually paying off?
Let’s start with the short answer: yes—but mostly for a specific kind of organization. Google’s 2025 ROI of AI and ROI of AI in security studies, based on surveys of more than 3,000 senior leaders, show a distinct group of “agentic early adopters” who deploy AI agents in production and commit a big share of their AI budget to them. Among them, roughly three-quarters report getting ROI on at least one generative AI use case within the first year, and security is one of the standout domains.
Security leaders in this group report tangible wins: improved security posture for almost half of organizations, faster time-to-resolution for more than 60%, and fewer tickets for over half. These are not abstract “insight” gains; they show up as reduced operational drag on security teams and fewer days spent fighting fires.
The three surprises hidden in the numbers
Here are three things that challenge the usual hallway chatter about AI in security.
1. Security is becoming one of AI’s clearest ROI stories
The Google Cloud data shows that AI agents are now a top-three use case area, with security operations among the leading domains where they’re deployed. At the same time, analysis of AI ROI across domains finds that nearly half of organizations using AI report meaningful impact on security posture, including better threat identification, integrated response, and lower ticket volume.
2. Most of the enterprise-wide “AI value” is still modest
McKinsey’s 2025 State of AI work describes a world where AI is almost everywhere—and still underwhelming at the enterprise level. Only a small percentage of organizations report any enterprise-wide EBIT impact from AI, and for those that do, the impact typically stays below 5%, even though they may see more impressive gains in specific functions such as operations, marketing, and strategy. In that context, security stands out because AI-enabled reductions in breach likelihood, scale, or recovery time map directly to dollar savings.
3. Leaders are getting more than double the ROI of everyone else
BCG’s latest research on AI value finds that leaders—roughly 5–6% of firms—expect more than twice the ROI of the rest, translating into around a 5% reduction in addressable operational expenses and a 5% increase in addressable revenues from AI initiatives. These leaders generate over 60% of AI’s value from core business functions, not support activities, and increasingly include security in that “core” bucket.
So, where exactly is the ROI coming from?
When you map out use cases, a few patterns emerge that explain why some organizations see strong returns and others don’t.
Agentic SOC and incident response: AI agents now enrich alerts, pull in threat intelligence, summarize logs, and recommend response steps inside SOC workflows (see the sketch after this list). Early adopters report less noise for human analysts, faster triage, and more consistent execution of response runbooks—key drivers of fewer tickets and quicker containment.
Security as part of wider AI-driven growth: Beyond security, Google’s broader ROI of AI work shows that 56% of organizations report business growth tied to AI and more than half see AI-driven productivity or revenue uplifts in the range of 6–10%. Security might not directly increase revenue, but it protects that growth by reducing the odds and impact of catastrophic incidents that can erase years of gains.
Core functions, not experiments: BCG’s data shows that over 60% of AI value for leaders now comes from core operations, sales and marketing, and R&D, with AI also supporting cost transformation programs where 5%+ savings are on the table. Security’s role here is increasingly to enable these AI-heavy functions to operate safely, which is why leaders are wiring governed AI into both offensive (growth) and defensive (risk) plays.
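To make the enrich-score-recommend loop of an agentic SOC concrete, here is a minimal sketch in Python. It assumes a SIEM hands the agent a parsed alert and stubs out the threat-intelligence lookup; the names (Alert, ioc_reputation, runbook_for, enrich_alert) are illustrative, not any vendor's API.

```python
# Minimal sketch of an alert-enrichment step. The threat-intel lookup is
# stubbed; a real agent would call a TI platform with scoped credentials.
from dataclasses import dataclass, field


@dataclass
class Alert:
    source: str
    indicator: str          # e.g. an IP or file hash pulled from the raw event
    severity: str = "low"
    context: dict = field(default_factory=dict)


def ioc_reputation(indicator: str) -> dict:
    """Stubbed threat-intel lookup (illustrative data only)."""
    known_bad = {"203.0.113.7": "botnet C2"}
    return {"indicator": indicator, "verdict": known_bad.get(indicator, "unknown")}


def runbook_for(verdict: str) -> list[str]:
    """Map an enrichment verdict to recommended response steps."""
    if verdict == "unknown":
        return ["close as benign if no corroborating detections"]
    return ["isolate host", "block indicator at egress", "open incident ticket"]


def enrich_alert(alert: Alert) -> Alert:
    """Enrich, re-score, and attach a recommended runbook to one alert."""
    intel = ioc_reputation(alert.indicator)
    alert.context["intel"] = intel
    alert.severity = "high" if intel["verdict"] != "unknown" else alert.severity
    alert.context["recommended_steps"] = runbook_for(intel["verdict"])
    return alert


if __name__ == "__main__":
    triaged = enrich_alert(Alert(source="ids", indicator="203.0.113.7"))
    print(triaged.severity, triaged.context["recommended_steps"])
```

In a production deployment this loop would sit behind the kinds of guardrails discussed below: scoped credentials for lookups and human approval before any containment step actually executes.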
What’s getting in the way for everyone else?
If all of this sounds promising, the natural follow-up is: why isn’t everyone seeing these results? The research points to a few recurring roadblocks.
Spread too thin across pilots: Many organizations try dozens of AI experiments without a clear view of which ones tie to meaningful financial outcomes, leading to effort everywhere and impact nowhere. Both McKinsey and BCG find that high performers do the opposite: they pick a few high-stakes use cases and scale them aggressively rather than endlessly piloting.
Risk and governance running behind adoption: As AI moves into security and other sensitive areas, risks around data privacy, model security, and “shadow AI” become more prominent. IBM-linked analyses show that organizations with weak AI controls face higher costs when AI-related incidents occur, reinforcing the idea that governance is not a luxury add-on, but a core enabler of value.
Underestimating the change in how work happens: High performers invest heavily in redesigning workflows and roles around AI, rather than bolting AI onto existing processes. That often means shifting more than 20% of digital or transformation budgets into AI and operating-model changes, not just into models themselves.
If you only copy three things from the leaders…
For a fast-moving audience, here are three practical moves suggested by the data.
Pick one flagship security use case with a hard-dollar story: Start with something like AI-assisted alert triage, phishing response, or incident enrichment—places where reduced tickets, faster response, or fewer major incidents can be translated into avoided losses and reclaimed analyst hours.
Put agents in production, not in a lab: The biggest ROI shift happens when AI agents are embedded directly into SOC tools, ITSM platforms, or security workflows, with clear guardrails and access controls. That’s how early adopters are achieving fast ROI on at least one gen AI use case and moving from “experimentation” to “value realization” in under a year.
Measure ROI in cash, days, and tickets—not just “insights”: Track metrics like days of detection and containment avoided, tickets reduced, business downtime prevented, and the financial impact of incidents that were mitigated or avoided. Then compare these to your AI program costs to show payback periods your CFO and board recognize; a back-of-the-envelope version of that math follows below.
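As an illustration of that payback arithmetic—with made-up numbers, not figures from any of the studies cited here—the sketch below converts tickets avoided and analyst hours reclaimed into annual savings and a simple payback period.

```python
# Back-of-the-envelope payback calculation with illustrative numbers only;
# substitute your own ticket volumes, hourly rates, and program costs.
def simple_payback(annual_savings: float, annual_program_cost: float) -> float:
    """Years until cumulative savings cover the annual AI program cost."""
    return annual_program_cost / annual_savings


tickets_avoided_per_month = 400        # assumption
hours_saved_per_ticket = 0.5           # assumption
loaded_analyst_rate = 90.0             # USD per hour, assumption
avoided_incident_losses = 250_000.0    # expected annual value, assumption

annual_savings = (
    tickets_avoided_per_month * 12 * hours_saved_per_ticket * loaded_analyst_rate
    + avoided_incident_losses
)
annual_program_cost = 300_000.0        # licenses, integration, run costs

print(f"Annual savings: ${annual_savings:,.0f}")
print(f"Payback: {simple_payback(annual_savings, annual_program_cost):.2f} years")
```

With these placeholder inputs the program pays for itself in well under a year, which is the kind of result the early-adopter group reports; the point is to run the calculation with your own numbers rather than rely on survey averages.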
Sources and further reading
This article draws on data and analysis from:
Google Cloud: Beyond the hype: Analyzing new data on ROI of AI in security
Google Cloud: The ROI of AI report
McKinsey: The State of AI in 2025
BCG: Where’s the Value in AI?
IBM and Ponemon Institute: Cost of a Data Breach Report