- Executive Offense
- Posts
- 🔴 Executive Offense - LLM Hacking PT 2!
🔴 Executive Offense - LLM Hacking PT 2!
pt2 of some amazing resources for EVERYONE to try hacking AI

Hey all!
Welcome to our “welcome back” issue!
Following up on our last deep dive into LLM security and the amazing interview with Sander Schulhoff, today we're going to expand our resource list significantly.
Since part one there has been a crazy amount of new learning environments for people to get up to speed with this skill set.
🌐 Ready-to-Play Online Platforms
Prompt Airlines by Wiz

Link: https://promptairlines.com/
Prompt Airlines by Wiz is a fantastic multi-level LLM hacking challenge to demonstrate the security implications of what can happen when an organization hands over their agency to an LLM based chatbot.
While some in the AI security assessment realm dismiss this particular risk, it's important to note that in the US we haven't had too many law cases around weather organizations are held to what AI chatbots tell users.
MyLLM Bank by With Secure

Link: https://myllmbank.com/
MyLLM Bank by With Secure could be your first exposure to packing a multi llm enabled system. This challenge has three progressively harder Flags that requires you to bypass real-world hurdles when attacking web applications that aren't abled with llms. This is my preferred lab to give students after they have completed Gandalf which was one of the cited projects in part 1.
Want to compete using AI hacking skills to win $10,000?
The @pangeacyber $10,000 prompt injection challenge is now live! Join fellow players in our first escape room and try to trick the AI chatbot into revealing secret phrases needed to escape.
Register now to play: https://pangea.cloud/landing/ai-escape-room/
⚔️ Outsmart the AI chatbot, extract secret phrases, and outscore your competition
⏰ 3 Rooms. 4 weeks of play.

Web LLM Attacks by Portswigger

PortSwigger offers four cascading level of difficulty labs for web llm based attacks. These are particularly great to practice on because these labs are somewhat hidden inside a real application and represent some of the most common vulns you will find in the wild for LLM enabled webapp hacking. Not only that but they are part of the amazing WebSecAcademy Project which is one of the best free resources to learn application security testing skills, for free, in the world.
You will need to register for this one =)
GPT Prompt Attack ⛳

Link: https://gpa.43z.one/
GPT prompt attack is another challenge similar to that of Gandalf but it allows you to see the system prompt that's protecting the secret which is really useful for learners to understand how prompt secrets are protected in LLM based systems.
This one is pretty comprehensive as it has 21 levels to challenge you!
MyLLM Doctor by With Secure

Link: https://myllmdoc.com/
The With Secure challenges referenced in this week's newsletter are by far some of the most challenging and real world you will find. Most CTF and challenge makers currently are just focusing on prompt injection to find something in the system prompt or simple attacks against a web-based system utilizing an LLM, but with only one LLM.
In the wild, it's been my personal experience that most production systems that you will have to assess as a security professional with this new skill set will be agentic or multi llm systems. This means there's more hoops to jump through to exploit an application but also a lot more attack surface.
Want to learn this skill with a comprehensive step-by-step methodology?
⚔️ We just finished our first run of our new class “Attacking AI”
This two-day course provides security professionals with hands-on experience in assessing and attacking AI systems.
Multiple students have used the class already to win AI Hacking Competitions or exploit LLM enabled systems this week!
Lookout for the dates for the next class…
Professional Secure AI Bot

This one is one of the only ones that I'm including in today's list that requires you to stand it up yourself, but it's worth it because it has a couple of challenges inside it that represent appsec bugs delivered via llms, and some other interesting labs. If you can handle standing up a Docker it's pretty easy to set up, and definitely worth your time!
/ Outro
This concludes the SECOND post on resources and an introduction to LLM and AI security. And if you can believe it I have even a third one coming up for you all!
Be sure to subscribe for more in the future!