- Executive Offense
- Posts
- 🔴 Executive Offense - RSA Week WrapUp, Sitting Down with Sam Altman, ++
🔴 Executive Offense - RSA Week WrapUp, Sitting Down with Sam Altman, ++

Well, that was a doozy…
Arcanum spent 10 days in San Francisco doing workshops and presentations.
Now, if you want to see a raw stream of consciousness conversation between two very handsome men, you can check out a video I recorded with Daniel Miessler in his command center while I was in town:
We talked about our overall takeaways from the week and a bunch of ideas around AI and ML topics.
BsidesSF / RunSybil / DryRun / RAD Security AI Events
Throughout the week we had several events that we presented are AI pen testing methodology and prompt injection taxonomy to. Both were very well received!
The slides are included below for subscribers!
(Sponsor)
Agentic AI SOC Analyst for Autonomous Alert Investigations

Triaging and investigating noisy security alerts drains time, attention, and morale. It's slow, error-prone, and leads to mounting backlogs. Prophet Security’s Agentic AI SOC Analyst takes on this work autonomously, delivering full-context investigations at machine speed and precision.
Prophet AI improves SOC efficiency by cutting time spent on low-value alerts, expanding investigation coverage, and reducing the risk of missed threats. It acts as a force multiplier, augmenting analysts so they can move faster and focus on what matters.
See how Prophet AI can transform your SOC: Request a Demo
(Note from Jason: while working throughout the space of AI in cyber security, I am continually on the lookout for ways to scale blue teams. Right now, there's not many doing it at the level of Prophet. I highly recommend you check them out.)
The OpenAI Security Research Conference
My highlight of the week was definitely the open AI security research conference. The con was about 100 invitees from both academia and industry. It was a full day of presentations on automating security, workflows in both the red and blue team domains as well as sub topics like better eval for models and even some private announcements on the future of models at open AI.
One of the coolest things was that we got about an hour of Sam Altman's time to ask him questions about the intersection of cyber security and generative AI.

Prompt injection, solvable?
One of the questions that Dan asked Sam was that he had stated several years ago that he thought prompt injection was a solvable problem and does he feel like that's still the case in 2025. While the conference was not recorded Sam talked around the idea that he thinks in the next couple years that we can get to 95% solved but there's always going to be some level of creative bypass from experts like us.
Technical domain skill atrophy?
One of the questions I asked was around the atrophy of technical skills. As I do more and more coding via my AI-assisted IDE, I’ve noticed that I have a hard time remembering simple Linux commands that I used to know by heart. That scares me a bit, as the core of my engineering knowledge was built on *nix. So I asked Sam his thoughts on the topic.
His answer was the argument that over generations and decades, we've definitely lost the ability to use things like the abacus or similar tools—and that we don't use punch cards or anything like that anymore either. While I was there, excited and pumped up, I kind of just took that argument and reduction at face value.
When I got home, though, I definitely thought about it a little bit more.
The way I think this time in history is different from us just outgrowing things like punch cards is that we’re rapidly seeing whole engineering disciplines go away. It’s not just proficiency in a few technologies here and there as we phase them out—it’s whole domains of technical knowledge being abstracted away.
While I know personally that abstraction leads to innovation in many parts of the web world, I still worry that this is an unprecedented disruption to software engineering and security engineering.
Does my worry overtake my excitement as we enter a golden age of new software? No… but if I were to reflect on these same thoughts six months ago, I would have been more optimistic for humanity. Right now, I’m a little bit less.
What security thing keeps Sam Altman up at night?
One of the questions asked was: What keeps Sam Altman, CEO of OpenAI, up at night when it comes to security? You can see Dan and me talk about this in the YouTube video above.
Sam’s answer was fascinating. He talked about how, eventually, models will live locally on our phones, maintaining a running database of information pulled from every app we use. These models will begin to capture the ethos of who we are—our behaviors, preferences, patterns—so that personal AIs can truly become assistants, capable of interacting with the world on our behalf.
This is exactly what Dan has been writing and speaking about for the past 12 years, across numerous blog posts and talks. So when Sam started describing this vision, I was literally poking Dan on the shoulder—he’d been predicting this exact trajectory.
But then came the darker side of the insight: Sam noted that losing the data that builds this context—or even worse, the model itself—could be catastrophic. Because the model would contain such a deep, intimate understanding of you, down to the most minute combinations of traits and behaviors that make you… you.
It’s the ultimate personal data breach.
New unfettered security models?
Since it was a room full of hackers, it was inevitable that someone would ask whether OpenAI plans to offer a security researcher–focused version of their models.
Right now, when you try to use OpenAI models for security research—like writing malware proofs-of-concept or crafting exploits—some of the more safety-tuned models will block the requests. While these refusals can often be bypassed with clever prompt engineering, it’s still a pain and slows down workflows. So naturally, someone asked if OpenAI would be willing to release a version of their models without refusal mechanisms for vetted researchers.
Sam’s answer was essentially yes—but with no timeline and a clear caveat: they still need to figure out how to verify and gate access appropriately.
Interestingly, this conversation picked up again later that week in a Signal group I’m in, with Dave Aitel jumping in. According to him, this might actually happen soon—specifically with OpenAI’s “o3” frontier models. Again, no concrete dates yet, but movement is happening behind the scenes.
In addition, Dave gave a private presentation about a Skunk Works project inside OpenAI, codenamed Aardvark. I initially wasn’t sure I could talk about it due to the heavy NDAs around attending the con—but several other attendees have already posted about it on their blogs, so I think it’s safe to share now.
Aardvark is a specialized model trained on exploit datasets, security analysis content, and reverse engineering knowledge.
It has a similar feel to Google’s recently announced Sec-Gemini, but with deeper focus on real-world offensive and defensive TTPs. Current open security-tuned models on platforms like Hugging Face leave a lot to be desired—they lack depth, context awareness, and specialized corpus training.
When frontier-grade security models like Aardvark or o3 get released (even in a gated fashion), it could mark a huge shift in how we perform automated analysis, exploit research, and tool development.
/ Outro
There's absolutely too much going on to include all of the updates from the trip and one newsletter… as I parse through my notes I definitely release some more stuff in the next couple weeks that I thought was cool.
We managed to snag this awesome pic of some of the homies at the OpenAI Con:

That being said, I hope you all have a wonderful weekend! 😀󠅘󠅙󠅔󠅔󠅕󠅞󠅔󠅑󠅤󠅑󠄢󠄠󠄢󠄥󠅘󠅙󠅔󠅔󠅕󠅞󠅔󠅑󠅤󠅑󠄢󠄠󠄢󠄥