🔴 Executive Offense - Arcanum's AI Security Scaling Rubric

Hey everyone!
For the last year, Arcanum has been delivering a consulting service called our AI Security Scaling Assessment. Our approach is pretty straightforward: we interview security teams to find out where AI could meaningfully scale their work. We don’t go in with a prescribed toolset. Instead, we surface real, internal use cases and then dig into the technology stack and vendors already in place to identify where AI can make the most impact.
Today, I want to share the general rubric we've developed from these engagements. It maps how most organizations progress in their adoption of AI within security. It’s not exclusive to security—you could apply it to any business function—but for those of us working on scaling security teams, it provides a practical lens for planning and execution.
I thought it would be helpful to share with practitioners and leaders like yourselves!
Scaling Security Teams with AI: A Three-ish Stage Model
Stage 0: Adoption
The first stage, which isn't even listed on our graphic, is winning the hearts and minds of the organization to adopt AI in the first place. Many organizations are still grappling with the privacy implications of AI adoption, even as they eye its efficiency gains. Using state-of-the-art models from cloud vendors means working with software-as-a-service companies, which raises privacy and security concerns for many security groups. Beyond privacy and security, you also have security practitioners who see AI as a fad and are overly hesitant to adopt the technology.
For privacy concerns, we’ve found that many orgs already have strong contractual frameworks with Microsoft due to OS licensing, M365 usage, or Azure cloud deployments. This often opens the door to Azure AI Foundry, where OpenAI’s models can be run under existing agreements. We usually see a noticeable 10–15% efficacy boost when using these higher-quality, foundational models, particularly in real-world security workflows that rely on nuanced natural language reasoning.
As for the trust issue, that's a leadership problem. It's leadership's job to communicate that AI isn't here to replace people; it's here to scale them. During assessments, this comes through quickly: if you know what to ask, you'll surface a pile of repetitive tasks, reporting overhead, and friction points that are easy AI targets.
(Sponsor)
Navigating M&A: What every security leader needs to know

M&A is exciting – new products, new colleagues, new possibilities. But cybersecurity, though often overlooked, can make or break the success of the entire deal. Acquirers often face fragmented systems, conflicting security policies, and new vulnerabilities, all of which introduce real security risk.
On July 17th, Dave Lewis, Wendy Nather, and Kane Narraway will draw on their collective experience of 30+ M&As to examine the security implications of M&A, outline strategies for mitigating risk, and demonstrate why security architecture must be embedded in the due diligence process.
(Note from Jason): This webinar from 1Password is one I’m personally attending. M&A is a significant source of risk, and this panel features some of my favorite and most respected security leaders. Definitely check this one out.
Stage 1: Task-Level AI Assistance
Scope: Single, narrow tasks
Goal: Improve individual output and remove bottlenecks
Audience: Analysts, engineers, first-line teams
Stage 1 is the entry point, where AI shows up as a helpful assistant, not a system. These implementations are usually purpose-built copilots, bots, or new tool features that focus on one task at a time.
They're built around simple API calls or exposed through tools like Microsoft Copilot web apps. Others show up as new “AI” pop-out panels in security dashboards teams already use.
Typical examples we see include drafting and formatting reports, combining data, converting data formats between tools, assisting with detection rule logic, or helping with lightweight risk scoring. The point here is task support, not full workflow automation. These solutions aren’t connected to other systems and aren’t built to scale, but they reduce cognitive load and increase speed, especially on repetitive tasks.
When an organization builds Stage 1 solutions itself, or implements things like copilots, it's important to remember that system prompting is the magic behind the efficacy of many of these implementations. A good system prompt for a single task, one that includes a structured methodology for solving the task and examples of good solutions, can raise the quality of that model's output substantially. For every implementation, we typically provide a system prompt methodology we've developed at Arcanum for each bot.
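To make that concrete, here's a minimal sketch of what a single-task bot with a structured system prompt can look like. It assumes the OpenAI Python SDK and an API key in the environment; the prompt itself is illustrative, not Arcanum's actual methodology.

```python
# Minimal sketch of a single-task "report drafting" bot.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY in the environment;
# the prompt structure is illustrative, not Arcanum's actual methodology.
from openai import OpenAI

SYSTEM_PROMPT = """You are a security report-drafting assistant.

Methodology:
1. Read the raw finding notes.
2. Classify severity using CVSS-style reasoning.
3. Write: Title, Summary, Impact, Reproduction Steps, Remediation.

Example of a good output:
Title: Reflected XSS in /search
Summary: User input in the `q` parameter is echoed unencoded...
(Impact, Reproduction Steps, and Remediation follow the same style.)
"""

client = OpenAI()

def draft_report(raw_notes: str) -> str:
    """One narrow task: turn analyst notes into a formatted finding."""
    response = client.chat.completions.create(
        model="gpt-4o",  # or an Azure AI Foundry deployment name
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": raw_notes},
        ],
        temperature=0.2,  # low temperature for consistent formatting
    )
    return response.choices[0].message.content

print(draft_report("found xss in search box, q param, alert(1) fires"))
```

The methodology and examples in the system prompt do most of the work here; the surrounding code is deliberately thin, which is exactly the Stage 1 pattern.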
Stage 2: Domain-Level Automation
Scope: Automating repeatable workflows in a single domain
Goal: Reduce manual cycles and increase consistency
Audience: Functional teams—SecOps, vuln management, identity, cloud security, etc.
Stage 2 is where AI starts to live inside actual workflows. You’re not just getting help on a one-off task anymore; you’re designing systems that can trigger on events, enrich data, suggest responses, and even generate tickets or documentation automatically.
Examples here include alert triage, threat intel parsing, vulnerability report summarization, automated remediation suggestions, or regularly scheduled report generation. The AI has some context and often has access to structured data via a RAG pipeline. It’s embedded into a system and usually produces outputs that hit ticketing systems, dashboards, or shared folders.
The main difference from Stage 1 is the introduction of automation, repeatability, and functional awareness. The AI knows where it's operating and what it's supposed to do, even if it's still contained to one domain.
Many of the improvements in Stage 2 come from automating email, ticketing, and the intermediate “glue” parts of processes. This also means staff are no longer copying and pasting out of chatbots, which is a significant workflow enhancement.
Additionally, the tech stack here incorporates AI agents and Retrieval-Augmented Generation on vector stores, along with basic integrations with existing security tools. This enables AI systems to gain even more context to support various identified use cases. A solid data implementation that complements AI, such as a vector store, can boost effectiveness and problem-solving by another 10 to 15 percent. Now, you have excellent system prompts for agents and an additional boost from RAG.
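As a rough illustration of the pattern, here's a stripped-down RAG lookup over a handful of runbooks. It assumes the OpenAI Python SDK and numpy; a real deployment would swap the in-memory list for an actual vector store (pgvector, Pinecone, and so on).

```python
# Minimal sketch of RAG over security runbooks. Assumes the OpenAI
# Python SDK and numpy; a production system would use a real vector
# store instead of this in-memory list.
import numpy as np
from openai import OpenAI

client = OpenAI()

RUNBOOKS = [
    "Phishing triage: check headers, detonate attachments in sandbox...",
    "Dormant account review: query IdP for last-login > 90 days...",
    "CVE patching SLA: critical = 7 days, high = 30 days...",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(
        model="text-embedding-3-small", input=texts
    )
    return np.array([d.embedding for d in resp.data])

DOC_VECTORS = embed(RUNBOOKS)  # embed the knowledge base once, up front

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k runbooks most similar to the query (cosine similarity)."""
    q = embed([query])[0]
    scores = DOC_VECTORS @ q / (
        np.linalg.norm(DOC_VECTORS, axis=1) * np.linalg.norm(q)
    )
    return [RUNBOOKS[i] for i in np.argsort(scores)[::-1][:k]]

def answer(question: str) -> str:
    """Stuff retrieved context into the prompt so the model answers in-policy."""
    context = "\n".join(retrieve(question))
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How fast do we need to patch a critical CVE?"))
```

The design point is simply that retrieval happens before generation, so the model answers from your data rather than from its training memory.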
Stage 3: Org-Wide Automation and Insight
Scope: Cross-domain orchestration and insight
Goal: Use AI to detect patterns and generate insight across silos
Audience: The entire security org
Stage 3 is the advanced state. This is where AI acts as a connective layer across your entire security stack. It's not just helping with one task or even one workflow—it's enabling cross-domain insight and orchestrated automation.
In this stage, AI agents can operate across identity, endpoint, cloud, appsec, and more. They pass context between each other, act on behalf of teams, and generate investigations that would have taken days of human correlation. You start getting answers to questions like, “What dormant high-privilege accounts still have access to production systems?” pulled from multiple systems in seconds. Another implementation many companies are aiming for in Stage 3 is the partial or full automation of their vulnerability management pipeline.
You also begin to use memory, reasoning, and other advanced features of state-of-the-art models. The agent architecture shifts from task-specific bots to planners and executors. Each agent has significant access to data, APIs, and tools to bring into context when managing a domain workflow.
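Here's a heavily simplified sketch of that planner/executor shape. Everything in it is hypothetical: the tool functions stand in for real identity and cloud IAM integrations, and the plan is hardcoded where a Stage 3 agent would have a model generate it.

```python
# Heavily simplified planner/executor loop for a cross-domain question like
# "What dormant high-privilege accounts still have access to production?"
# The tool functions are hypothetical stubs standing in for real integrations.
from datetime import datetime, timedelta

def query_idp() -> list[dict]:
    """Stub: would call your identity provider's API (Okta, Entra ID, ...)."""
    return [
        {"user": "svc-legacy", "privileged": True,
         "last_login": datetime.now() - timedelta(days=200)},
        {"user": "alice", "privileged": True,
         "last_login": datetime.now() - timedelta(days=2)},
    ]

def query_cloud_iam(user: str) -> list[str]:
    """Stub: would call cloud IAM APIs to list accessible resources."""
    return ["prod-db", "prod-k8s"] if user == "svc-legacy" else []

# A Stage 3 agent would have an LLM produce this plan; it is hardcoded
# here so the control flow stays visible.
PLAN = ["find_dormant_privileged_accounts", "check_production_access"]

def execute(plan: list[str]) -> list[dict]:
    findings, dormant = [], []
    for step in plan:
        if step == "find_dormant_privileged_accounts":
            cutoff = datetime.now() - timedelta(days=90)
            dormant = [a for a in query_idp()
                       if a["privileged"] and a["last_login"] < cutoff]
        elif step == "check_production_access":
            for account in dormant:
                resources = query_cloud_iam(account["user"])
                if resources:
                    findings.append({"user": account["user"],
                                     "prod_access": resources})
    return findings

print(execute(PLAN))  # -> [{'user': 'svc-legacy', 'prod_access': [...]}]
```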
At this stage, the user interface of these systems is flexible: a single-pane-of-glass chatbot, an API wired into Slack, or any other consumption model. The final output remains the same: staff can ask natural language questions grounded in contextual business knowledge and receive insightful answers much faster than they could before.
Achieving this level of integration requires architectural work. You need a centralized security data lake, shared vector stores, Model Context Protocol (MCP) support, or agent-to-agent communication infrastructure. Once you reach that stage, AI is no longer just a tool; it becomes an operational capability.
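As one small example of that plumbing, here's a hedged sketch of exposing a security data source as an MCP tool so any MCP-capable agent can call it. It assumes the official `mcp` Python SDK; the lookup itself is a hypothetical stub.

```python
# Sketch of exposing a security data source over Model Context Protocol so
# any MCP-capable agent can query it. Assumes the official `mcp` Python SDK
# (pip install "mcp[cli]"); the lookup itself is a hypothetical stub.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("security-data-lake")

@mcp.tool()
def dormant_privileged_accounts(days_inactive: int = 90) -> list[str]:
    """Return privileged accounts with no login in `days_inactive` days."""
    # Stub: a real server would query the security data lake here.
    return ["svc-legacy", "old-admin"]

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio for MCP clients/agents
```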
Outro Thoughts
This is just a quick overview of our model for scaling security teams with AI. There are several intermediate steps between the stages that aren't discussed here. It's also very important to note that roughly 98% of organizations are currently in Stage 0 or Stage 1, and even the efficiency gains from Stage 1 implementations can significantly impact teams. The journey isn't straightforward, and it's often slowed by the internal politics of technology implementation. The goal for any security team should be to reach Stage 2 implementations within the next three years, then move into Stage 3 over the following five. Stage 3 implementations will become easier as new protocols are developed, as models are fine-tuned for them and for the tools that support them, and as security vendors adopt these advancements.
That's it for now! I hope you all have a wonderful weekend! 😀