Executive Offense Issue #1

Atomics, Tabletops, ChatGPT Hacks, Hardcoded Secrets, oh my!

EO is a security newsletter that focuses on the intersection between offensive security and security strategy. Sometimes hacker-ish, sometimes CISO-ish. Very blazer over the t-shirt type of vibe…

Welcome to Executive Offense #1. Enjoy!

📰 News + Analysis

This incident reminds me of a few things that are important for security leaders:

  1. Have your PR & legal teams ready with templates for public and internal incidents. It will make these situations move infinitely faster, get those teams comfortable with the process, and help everyone know what to expect during each phase of an incident.

  2. Email addresses are considered PII in many parts of the world, so you must disclose at least to your regulators (your legal team should be all over this with you) and possibly to your users. It’s great to tabletop minor incidents like this with them a few times a year. You won’t regret it.

  3. Love your lawyers 😉 

OpenAI + ChatGPT Hacks

I’ve been monitoring the AI and security space as much as I can lately. This week alone, OpenAI’s ChatGPT was subject to FOUR separate security issues.

The first was the use of old, known-vulnerable libraries in their app, as Andrew Morris shows below:

Next was a 9-hour outage caused by a “race condition” bug in Redis asyncio.

A friend of mine, Nagli, also discovered an account takeover bug this week using web cache deception. Read the AWESOME technical thread:

The last was a hack using HTTP response tampering by my friend rez0.

By tampering with HTTP responses, he gained access to what looked to be early-access developer plug-ins in the plug-in ecosystem.
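The underlying bug class here is trusting the client: if the server returns feature flags and only the client enforces them, anyone sitting behind an intercepting proxy can flip them before the app reads them. A toy illustration of that rewrite step (the payload shape and flag names are hypothetical, not OpenAI’s actual API):

```python
import json

def enable_all_features(response_body: str) -> str:
    """Flip every feature flag to True in a (hypothetical) flags payload,
    the way an intercepting proxy would rewrite a response in flight."""
    payload = json.loads(response_body)
    flags = payload.get("features", {})
    payload["features"] = {name: True for name in flags}
    return json.dumps(payload)
```

The fix is the usual one: enforce entitlements server-side and treat anything the client sends back as attacker-controlled.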

So what are the learnings here? I have a really hard time believing OpenAI didn’t have something in the CI/CD pipeline (or an enterprise tool like Snyk) to warn of outdated dependencies. If they didn’t, that’s just… criminal, and they are moving way too fast.
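Even without an enterprise scanner, a minimal CI step catches this whole class of issue. A sketch of what that could look like as a GitHub Actions job for a Node app (workflow name and structure are illustrative, not any specific org’s setup):

```yaml
# Hypothetical CI job: fail the build when a dependency has a known
# high-severity vulnerability. npm audit exits non-zero on findings
# at or above the given level, which fails the step.
name: dependency-audit
on: [push, pull_request]
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
      - run: npm ci
      - run: npm audit --audit-level=high
```

Ten lines of YAML is cheap insurance compared to shipping a known-vulnerable library.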

With the Redis vuln that brought them down, wow, that one just hurts.

One thing you learn when you’re part of security leadership is that vulnerabilities that leak information, or even get exploited, often have less impact on your business than you would assume, as long as they don’t cause downtime.

One of the examples I like to talk about is a conversation I had with Twitch’s security team after they lost the source code for their application during a breach last year. I sat down with them after a conference talk they gave and asked them: How much monetary damage was associated with losing all of their source code like that? The rest of the conversation enlightened me.

Other than a very short blip in their stock price, a data leak, even of that magnitude, did not affect them, nor did they expect it to in the near future. Source code being forfeited usually only matters if your business is easily disrupted in the first place. The general consensus was that, even though the source code was out there, no competitor could actually operate it. Now, obviously, this has a lot of caveats depending on the breach and on the source code or data involved.

What has absolutely no caveat is when your org is impacted by downtime. Downtime is the absolute worst enemy of a company like OpenAI and many SaaS product platforms. The conversation with Twitch made me rethink the whole way I prioritized testing and security strategy at a high level.

With all this, I expect OpenAI to probably start up a bug bounty with the same type of crazy payout schedule as some of the web3 companies soon. If I were an enterprising bounty hunter, I would start looking for bugs now and wait until they launch the program and submit some stuff 😉 

So… leverage (or fix) your internal CI/CD security tooling and do layered testing: threat modeling, product security, pentesting, red teaming, and bug bounty.

🟦 The Blue Pill

Secrets Management & BadSecrets:

Hardcoded secrets are everywhere. It is such a hard, pervasive problem in the industry that even GitHub, which provides built-in secret-scanning tools, succumbed to it this week:

Not only are secrets hardcoded everywhere but they are often used in breaches as escalation points after initial entry.

For your development/engineering org, you want to minimize hardcoded secrets. A mature organization has a whole program around secrets management, usually composed of:

  • Detection

  • Prevention

  • Response

  • and Education

A sample secrets management program

Addressing hard-coded secrets in the “detect” phase can be done by hooking tools like TruffleHog and GitSecrets into the automated processes in your CI/CD platform. A new tool on the block, BadSecrets, can help you identify code copied from documentation that might contain weak or default secrets, a slightly different approach from the norm.
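To make the “detect” step concrete, at its core it is just pattern matching over your codebase. A toy sketch in Python (the two patterns are illustrative; real scanners like TruffleHog ship hundreds of rules plus entropy analysis and live-credential verification):

```python
import re

# Two illustrative detection rules; real tools go far beyond regexes.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{20,}['\"]"
    ),
}

def scan_text(text: str) -> list[tuple[str, int]]:
    """Return (pattern_name, line_number) for each suspected hardcoded secret."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                hits.append((name, lineno))
    return hits
```

Wire something like this (or, better, an existing tool) into a pre-commit hook and a CI step, and you cover both prevention and detection from the list above.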


None yet! 😁 

Table Tops! 

Matt has been a long-time friend of mine, and he’s recently started to get back into Twitter and content. I enjoy conversing with Matt because he has the same outlook as I do on a lot of security strategy. We both believe that different organizations benefit from different types of security programs. He’s also been both a hacker and a CISO, which I have mad respect for.

Anyway, he’s been posting tabletop threads on Twitter every once in a while. I find these threads super useful.

In this one, he reviews a scenario where you have verified that at least a few users have been phished and two-factor authentication has failed.

Here is my general game plan when things like this happen.

Tabletops are a valuable tool for the technical teams and the executive teams to prepare for a breach. It’s not “if” but “when” you’ll encounter a security incident, and having that muscle trained is infinitely worth it to your security program. If you can, try and get everyone involved.

🟥 The Red Pill

External vs Assumed Breach

This was a lively discussion on skipping external technical and phishing security testing in offensive security engagements. Nowadays, internal tests have been rebranded as “assume breach” services. They have more of a MITRE ATT&CK flavor and use a lot of fancy AD pivoting and magic. The real summary of the thread is that ALL testing types are important. Yes, an assumed-breach test saves time, but external tests give valuable insight into misconfigurations, password usage, cursory web vulnerabilities, and much more.

Facing an extremely crunched budget during the recession? Talk to your testing provider about bundling services. Remember that 3-year lock-in contracts can get you big discounts, and so can co-marketing with a vendor.

You can also take the approach of paying for an external test from a reputable company and using Atomic Red (and Purple) Team tests to help you identify internal risk.
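As a rough sketch of what that looks like in practice, the Atomic Red Team project ships a PowerShell runner; the technique ID below is just an example, so pick ones that match your own threat model:

```powershell
# One-time setup: install the Invoke-AtomicRedTeam runner
Install-Module -Name invoke-atomicredteam -Scope CurrentUser

# Example technique: T1059.001 (PowerShell execution)
Invoke-AtomicTest T1059.001 -CheckPrereqs   # verify the test can run here
Invoke-AtomicTest T1059.001                 # execute, then confirm detections fired
Invoke-AtomicTest T1059.001 -Cleanup        # revert any changes the test made
```

Run these in a lab or with your SOC looped in first; the whole point is checking whether each technique lights up your detections, not surprising your blue team.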

Ristbs’s Pocket Guide to OPSEC in Adversary Emulation

A great, maybe tragically named, resource for red teams.

RistBs outlines how to pinpoint and profile organizations' defenses with tips and tricks on evasion and common detections.

For red teamers, it’s got some solid tips.

For defenders, read in reverse, it’s essentially a catalog of ways to catch a red team or an adversary using common tooling. It also exposes some vendor flaws!

👋 Outro

Thanks for reading. I sincerely appreciate it. If you enjoyed the newsletter, please share the signup page on Twitter or with friends!

If you are a red teamer or bug hunter, consider checking out my training in July:

And finally, if you are anywhere near Florida, come hang with me and several of my hacker buddies at HackSpaceCon!

Ty to my best bro Dan Miessler for encouraging me to create the newsletter.

It’s a blast. 🚀 

📖 More Reading

I also have several newsletters which inspire me, and I suggest them wholeheartedly to security folk. Check them out: