CISO Tea Time
The Top 4 worries and discussions from Q1
Hey everyone!
Over the last few months, in addition to tech work, I’ve been having a ton of conversations with security leaders. A lot of these happen at cons like RSA (at offsite events), but there are also a few really good Slack channels and text groups I’m still a part of. One of the things I’m always trying to distill with these friends is: what are the top four things that are actually getting enterprises breached? What are the outsized risks everyone agrees on? This newsletter is a collection of musings from those Q1 conversations.
The enterprise vs. appsec budget tug-of-war
Most big orgs I see still run these as two fiefdoms. Enterprise security owns identity, endpoint, email, SOC. AppSec owns code, pipelines, and the production attack surface. Sometimes separate leaders, separate tools, separate budgets, separate war rooms.
The problem is that none of the threats actually respect that line anymore. The attack starts in one fief and ends in the other, and whichever side is underfunded is the side that gets hit.
Four conversation threads kept coming up in Q1 discussions (one of them is really two sides of the same coin). In my honest read, these are where the real loss events are happening right now, and where CISOs should be fighting hardest for headcount and applying technical defenses.
(Sponsor)
Leave Threat Actors Hungry
Threat actors don't brute force their way in. They buy stolen credentials and walk through the front door.
Flare monitors 100M+ stealer logs, thousands of Telegram channels, and hundreds of dark web marketplaces to surface your exposed credentials before attackers can weaponize them. With Entra ID and Okta auto-remediation, compromised credentials are revoked the moment they appear. Stop leaving crumbs.
1. The credential problem (leakage, stuffing, and phishing are one threat now)
This is the least sexy and best-known category. It is also the most expensive one. A lot of orgs are losing ground here because they treat the pieces separately. They're not separate. They're one attack surface with different entry points.
On the leakage and stuffing side, infostealer logs are a flood. Employees reuse passwords on personal apps, those apps get breached or the employee gets stealer'd on their home machine, and two weeks later the same creds are sprayed against every corporate login portal you own. Single-factor VPN appliance, dev tool that forgot MFA, some SaaS nobody inventoried. That's the door.
On the phishing side, adversary-in-the-middle kits have made session-cookie theft somewhat easy. Your MFA rollout from 2023 doesn't save you the way you thought it did. Push-bombing, TOTP relay, cookie replay are the default playbook now.
And both of these now feed into the non-human identity problem. Once an attacker is in as a real user, they pivot to OAuth tokens, service accounts, and SaaS-to-SaaS integrations. That's the Midnight Blizzard playbook. That's how Snowflake happened. The humans were the entry. The machine identities were the damage.
These should not be three programs. They're one. What modern looks like right now:
One team owns identity threats end-to-end. Not split between IAM, the SOC, and email security. Whether you call it ITDR or not, someone's job is to watch stealer logs, OAuth grants, session anomalies, and AitM kits as one problem.
Stealer-log / dark-web intel piped straight into IAM with forced resets.
Phishing-resistant MFA required (FIDO2 hardware keys, passkeys) for anyone touching production or sensitive data. Not "enabled as an option."
Device-bound session credentials on the roadmap. Chrome's DBSC is shipping and it breaks cookie replay in a way session-cookie theft can't route around. If you're doing a browser-security RFP in 2026 and DBSC isn't in the conversation, the vendor is behind.
Browser-layer detection catching AitM kits before the credential leaves the tab.
OAuth token and service account inventory treated like privileged access. Rotate, monitor, alert on anomalous consent grants.
Session reauth on sensitive actions. Short lifetimes, conditional access tied to device posture.
If your identity team is still shipping "we rolled out MFA" as the answer to credential attacks, they're two years behind. I'm not recommending any specific vendors here (except Flare… they are amazing), but you can Google any one of those features and find the big players. My real gripe is that most solutions in this space only tackle one slice of the identity problem rather than the whole credential-compounding chain. Because every vendor attacks it individually, you end up spending a ton of money as a security leader on licensing for tools that aren't even integrated with each other, and then doing the integration engineering yourself.
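To make the stealer-log-to-IAM bullet concrete, here's a minimal sketch of cross-referencing a stealer-log dump against your corporate domains and queueing forced resets. The CSV column names and the reset step are assumptions for illustration; real intel feeds and IdP APIs will differ.

```python
# Sketch: match stealer-log entries to corporate accounts and flag them
# for forced reset. Log schema (email,password,source) is a placeholder --
# adapt to your actual intel feed.
import csv
import io

def find_exposed_users(stealer_log_csv: str, corporate_domains: set) -> set:
    """Return corporate emails that appear in a stealer log."""
    exposed = set()
    reader = csv.DictReader(io.StringIO(stealer_log_csv))
    for row in reader:
        email = row["email"].strip().lower()
        domain = email.split("@")[-1]
        if domain in corporate_domains:
            exposed.add(email)
    return exposed

if __name__ == "__main__":
    sample = (
        "email,password,source\n"
        "alice@corp.example,hunter2,redline\n"
        "bob@gmail.com,pw123,raccoon\n"
    )
    for email in sorted(find_exposed_users(sample, {"corp.example"})):
        # Replace this print with a call to your IdP's reset/revoke API.
        print(f"force reset: {email}")
```

The point is the pipeline shape, not the parsing: intel feed in, IdP action out, no human in the loop for the reset itself.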
2. Edge appliance exploits (Cisco, Fortinet, Palo Alto, Ivanti, F5)
If you've been watching CVEs for the last two years, you've seen this curve. Attacks against infrastructure web portals are not slowing down. They are the new phishing, in terms of "how sophisticated threat actors get initial access without touching the endpoint."
Ivanti Connect Secure. Fortinet FortiOS. Palo Alto GlobalProtect. Cisco ASA. F5 BIG-IP. Every one of these has shipped critical auth-bypass or RCE chains in the last 24 months, and nation-state crews have been ready with exploitation within days.
Here's the thing. The modern answer isn't "patch faster." It's get rid of the appliance.
ZTNA has matured. Identity-aware, application-layer access replaces the classic VPN appliance without exposing a juicy HTTPS portal to every APT with a Shodan subscription. The orgs I talk to who have already done this migration are sleeping better than the ones still patching Ivanti on a Thursday night.
For appliances you genuinely can't replace yet (load balancers, firewalls at the physical edge):
CISA KEV as your patch SLA trigger. KEV listing means emergency cycle, not quarterly.
Management interfaces never on the public internet. This is still how people get popped.
Assume-breach segmentation behind the appliance. If Fortinet pops, what does the attacker actually reach?
Put them in the AppSec threat model. If it's an HTTPS portal exposed to the internet, it belongs there whether it shipped from a vendor or from your own dev team. The attacker doesn't care who wrote it.
Dedicated owner whose job is tracking vendor CVEs, testing patches, running emergency response. Because the next Ivanti-style drop is coming, and "it's the network team's problem" is not a plan.
The problem with these infrastructure web portals is that they were written in a past era of application security. Most of them rely on antiquated stacks like CGI scripts or old PHP code, still rife with bugs like local file inclusion, server-side request forgery, and authorization bypass. They were built before appsec had really matured. If they have to be your management plane, putting strong identity, authorization, and a proxy in front of them is a must.
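The KEV-as-patch-SLA-trigger idea is easy to automate. Here's a sketch that filters the KEV catalog for the edge-appliance vendors you actually run. The field names (`vulnerabilities`, `vendorProject`, `cveID`, `dueDate`) and feed URL match the published CISA KEV JSON as I understand it, but verify against the live feed before wiring this into an alerting pipeline.

```python
# Sketch: filter CISA KEV entries down to your edge-appliance vendors so a
# new listing can kick off the emergency patch cycle.

WATCHED_VENDORS = {"cisco", "fortinet", "palo alto networks", "ivanti", "f5"}

def kev_hits(catalog: dict, watched: set) -> list:
    """Return KEV entries whose vendor is on the watchlist."""
    return [
        v for v in catalog.get("vulnerabilities", [])
        if v.get("vendorProject", "").lower() in watched
    ]

# To run against the live feed (assumption: URL current as of writing):
#   import json, urllib.request
#   catalog = json.load(urllib.request.urlopen(
#       "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"))
#   for v in kev_hits(catalog, WATCHED_VENDORS):
#       print(v["cveID"], v["vendorProject"], "due", v.get("dueDate"))
```

Cron this daily and page the dedicated owner from the bullet above when it returns anything new.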
3. Dependency confusion, and the broader supply chain
Alex Birsan's dependency confusion research landed in 2021 and the attack pattern has not gone away. If anything, it has quietly gotten worse because the ecosystem got bigger, CI/CD got more complex, and internal package names leak in more places than most teams realize (Sentry stack traces, public npm configs in GitHub, job postings listing internal tooling, error messages in JS bundles shipped to browsers).
Then AI coding tools showed up and poured gasoline on it. Copilot, Cursor, Claude Code, all happy to suggest npm install some-plausible-sounding-package from a hallucinated name. Attackers are already squatting on those hallucinated names. If you think your devs aren't accepting those suggestions, check your lockfiles.
The modern defense stack is a lot richer than it was in 2021:
Private registries as the source of truth, with scope-locked public registry access.
Install-time malicious package detection. This category actually works now and catches the stuff the registries don't.
Build-time verification that internal package names don't resolve to anything public.
SLSA provenance and sigstore/cosign signing on anything you produce. SBOMs on anything you consume.
GitHub Actions pinned to commit SHAs, not tags. The tj-actions incident made this table stakes, and a surprising number of orgs still haven't done it.
Secret scanning on commits AND on CI output. Tokens leak into build logs constantly.
Treat the private-package namespace as a crown jewel asset. If an attacker can register your internal package name on the public registry, that's an RCE in your CI.
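The build-time name check from the list above can be a few lines of Python. This sketch assumes the public npm registry's behavior of returning 404 for unregistered names; the injectable `check` hook is there so you can stub the network call in CI tests.

```python
# Sketch: verify that internal package names do NOT resolve on the public
# npm registry. Any hit is a dependency-confusion risk (or an active squat).
import urllib.request
import urllib.error

def resolves_on_npm(name: str) -> bool:
    """True if `name` exists on the public npm registry."""
    url = f"https://registry.npmjs.org/{name}"
    try:
        with urllib.request.urlopen(url, timeout=10):
            return True
    except urllib.error.HTTPError as e:
        if e.code == 404:
            return False  # unregistered: safe (for now)
        raise

def confusion_risks(internal_names, check=resolves_on_npm) -> list:
    """Internal names an attacker has (or could have) registered publicly."""
    return [n for n in internal_names if check(n)]
```

Run it in the build with your real internal package list; a non-empty result should fail the pipeline, not just log a warning.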
Most CISOs I talked to this quarter admitted they don't know if their own org is vulnerable to the basic dependency confusion flavor right now, let alone the AI-suggestion variant. That's the canary. If you don't know, the answer is probably yes.
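The Actions SHA-pinning item from the list above is just as automatable. A rough sketch that flags `uses:` refs that aren't full 40-character commit SHAs; the regexes are illustrative, not a real YAML parser.

```python
# Sketch: flag GitHub Actions `uses:` lines pinned to a tag or branch
# (v4, main, ...) instead of a full commit SHA.
import re

USES_RE = re.compile(r"uses:\s*([\w./-]+)@([\w.-]+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(workflow_yaml: str) -> list:
    """Return action refs in a workflow file not pinned to a commit SHA."""
    flagged = []
    for action, ref in USES_RE.findall(workflow_yaml):
        if not SHA_RE.match(ref):
            flagged.append(f"{action}@{ref}")
    return flagged

if __name__ == "__main__":
    sample = (
        "  - uses: actions/checkout@v4\n"
        "  - uses: actions/cache@5a3ec84eff668545956fd18022155c47e93e2684\n"
    )
    print(unpinned_actions(sample))
```

Point it at `.github/workflows/*.yml` across every repo and you'll find out quickly whether the tj-actions lesson actually landed in your org.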
4. Shadow AI (it's the new shadow IT, but worse)
Every CISO conversation I had this quarter eventually ended up here. And every one of them sounded a little defeated about it.
Here's the shape of the problem. You rolled out an enterprise AI tier (Copilot, Gemini for Workspace, Enterprise ChatGPT, Claude for Work). You wrote a policy. You maybe blocked a few domains. You told yourself the AI problem was handled.
Meanwhile your developers are pasting proprietary code into whatever AI gives the best answer this week. Your sales team is uploading customer lists to a Chrome extension they found on ProductHunt. Your legal team is summarizing contracts in a free "GPT" wrapper that's white-labeling god-knows-what API on the backend. Your PMs are running autonomous agents against production Jira with their personal tokens. Nobody told you any of this.
Shadow AI has a few distinct flavors and they need different answers:
Data and IP exfiltration into consumer AI. Engineers pasting source code, prompts containing secrets, customer data, contracts. The scary version is the Chrome extension or free GPT wrapper that logs or resells what it sees.
Enterprise AI over-sharing. Microsoft Copilot and Google Gemini for Workspace inherit your existing ACLs. If your SharePoint is a decade of stale permissions, Copilot is going to help an intern find the board deck. Rolling out Copilot without an ACL cleanup project is a self-inflicted breach.
Agent sprawl. Employees running autonomous agents (Claude Code, Cursor agents, custom GPTs, Zapier AI, n8n flows) with real credentials against real systems. Nobody inventoried them. Nobody scoped their permissions. The agent has more access than the employee realizes.
Prompt injection against internal tools. Any AI feature that reads email, tickets, PDFs, or web pages is reading attacker-controlled content. Indirect prompt injection is no longer theoretical. It's the new SSRF.
AI-suggested supply chain attacks. This loops right back to section 3. Your devs are installing what the assistant recommends.
What modern defense looks like:
Visibility first. You cannot defend what you can't see. Network, DNS, and browser-layer telemetry to find what AI tools are actually in use. Most orgs find way more AI surface in the environment than the CISO expected on first scan.
Sanction a short list, block the worst, accept the rest will exist. Blanket blocking drives usage underground (onto phones, personal devices, mobile hotspots). A sanctioned enterprise tier with a real DPA plus a small approved-extensions list gets you most of the way.
Browser-layer DLP is where this actually gets solved. Endpoint DLP doesn't see into a ChatGPT paste. A browser-layer control that knows "this is a sensitive SharePoint doc being pasted into an AI tab" is the only tech that actually works for this use case.
ACL cleanup before Copilot rollout. Assume the Copilot launch is a pen test of your permission model. Run it before the attacker does.
Agent inventory and scoped credentials. Every autonomous agent gets an ID, an owner, a scope, and the least-privileged token that can do its job. No personal tokens on corporate agents.
Engineer training that's actually technical. "Don't paste secrets" is too vague. Show them examples. Show them what an exfil-extension looks like. Devs respond to specifics.
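The visibility-first bullet can start with something as simple as grepping DNS logs. Here's a sketch, with a deliberately tiny illustrative domain list; real coverage needs a maintained feed plus the browser-layer telemetry discussed above.

```python
# Sketch: first-pass shadow-AI discovery from DNS logs. Maps queried domains
# to known AI tools. The domain list is a small sample, not an inventory.
AI_DOMAINS = {
    "chatgpt.com": "ChatGPT (consumer)",
    "chat.openai.com": "ChatGPT (consumer)",
    "claude.ai": "Claude (consumer)",
    "gemini.google.com": "Gemini (consumer)",
    "api.openai.com": "OpenAI API (possible agent/integration)",
}

def ai_tools_seen(dns_log_lines) -> dict:
    """Map AI tool -> set of client IPs that queried it.
    Assumed log format per line: '<client_ip> <queried_domain>'."""
    seen = {}
    for line in dns_log_lines:
        parts = line.split()
        if len(parts) != 2:
            continue
        client, domain = parts
        tool = AI_DOMAINS.get(domain.lower().rstrip("."))
        if tool:
            seen.setdefault(tool, set()).add(client)
    return seen
```

Even this crude version tends to surface the "way more AI surface than expected" result on the first run; the API-endpoint hits are the interesting ones, because those are usually agents, not humans.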
Here's the real frustration though. Even when an org does sanction an AI tier, the security controls around that sanctioned tier are wildly underfunded right now. Browser-layer DLP, agent inventory, ACL cleanup, prompt injection testing, red-teaming the Copilot deployment, none of these line items make it into the AI rollout budget. The money goes to licensing and enablement, not to the controls that keep the rollout from becoming a breach.
And that assumes the org even understands that sanctioned AI needs security controls in the first place, which, frankly, is laughable in a lot of places. Plenty of shops are treating Copilot or Enterprise ChatGPT like they treated the last SaaS rollout. Turn it on, pick a champion, call it a transformation. Meanwhile the thing has read access to every poorly-permissioned document in the company and no one has run a single adversarial test against it.
Honestly, this is the one where I think CISOs are furthest behind the actual threat. The AI adoption curve inside enterprises is moving faster than any security category has moved in ten years, and the defense tooling and the defense budget are both playing catch-up. Expect this to be the dominant conversation for the rest of 2026.
Where the budget actually goes
Taking all this together, the picture that kept coming up on these calls is that the orgs getting the fewest surprises are the ones who stopped treating AppSec and enterprise security as two separate fiefdoms.
The credential story starts on a personal device or in an inbox and ends at a production API or an OAuth token. The appliance exploit story starts with a vendor CVE and ends inside the corporate network. The dependency story starts in a dev's IDE (or their AI assistant) and ends in prod. The shadow AI story starts in a browser tab and ends with your crown jewels in someone else's training data or a Copilot response an intern didn't need to see.
Every one of these spans both fiefdoms. The budget should too.
If I had to write the priority stack for a CISO in 2026 it's actually pretty simple. Put one team in charge of every flavor of credential attack, and make hardware-key MFA non-negotiable for anyone near production. Get rid of the VPN appliance where you can, and treat the ones you can't kill like production web apps (because that's what they are). Actually test your build pipeline for package attacks, and lock down what it's allowed to pull in. And fund AI security as its own line item with its own headcount, not an afterthought bolted onto the Copilot rollout.
Everything else is a rounding error next to those.
Outro
That's it for this one. If you're a CISO, borrow whatever is useful for your next priority conversation. If you're a hunter or a red teamer, these are where the ROI is right now on real engagements.
Thanks for reading, and as always, feel free to reply or hit me up on Twitter if something in here sparked a thought.
-Jason
