• Executive Offense
  • Posts
  • 🔴 Executive Offense - (Release) The Arcanum Prompt Injection Taxonomy v1.5

🔴 Executive Offense - (Release) The Arcanum Prompt Injection Taxonomy v1.5

An AI Security Testers Companion

Hey everyone!

I'm excited to announce the 1.5 release of the Arcanum Prompt Injection Taxonomy - a comprehensive, open-source classification system for prompt injection attacks against Large Language Models.

After months of research on updates, testing, and collaboration with the AI security community, we've built what I believe is the most detailed taxonomy of prompt injection techniques available today. This isn't just another markdown file buried in a GitHub repo - it's a fully interactive web interface that lets you explore attack vectors, understand evasion techniques, and learn how attackers are targeting AI systems in the wild.

The taxonomy is structured around four core dimensions that cover the entire prompt injection attack surface:

Attack Intents capture the goals attackers are trying to achieve - whether that's data exfiltration, jailbreaking safety controls, or manipulating LLM outputs for social engineering.

Attack Techniques document the actual methods used to execute these attacks, from direct prompt injection to more sophisticated indirect injection via external data sources.

Attack Evasions catalog the obfuscation methods attackers use to bypass defenses - everything from Base64 encoding and cipher substitution to more creative approaches like emoji encoding and fictional languages.

Finally, Attack Inputs map out the various surfaces where prompt injection can occur, helping defenders understand where to focus their security controls. Each entry includes detailed descriptions, general concepts, and real-world examples you can use for testing.

(Sponsor)

A CISO Guide to the OWASP Top 10 for LLM Applications

AI adoption is accelerating, but so are the risks. The new OWASP Top 10 for LLM Applications gives CISOs and technology leaders a clear framework to assess and mitigate emerging AI threats — from prompt injection and data exposure to compromised tools and poisoned RAG pipelines. Explore our interactive guide to understand where your AI programs may be vulnerable and how to strengthen governance, visibility, and security across your LLM stack.

Explore the full interactive guide here:

https://www.paloaltonetworks.com/resources/infographics/llm-applications-owasp-10

(Jason Note) This is a FANTASTIC RESOURCE I highly recommend you check it out as it goes hand in hand with this weeks release =)

The Arcanum PI Taxonomy is completely open source and designed for community contribution. The markdown source files live in the GitHub repository, and we actively encourage PRs for new techniques, evasions, or attack patterns you discover in your research.

Whether you're a security researcher, bug bounty hunter, red teamer, or AI developer trying to secure your systems, this taxonomy is for you. Use it to design test cases, understand attacker tradecraft, or educate your engineering teams about AI security risks.

The interactive interface makes it easy to search for specific techniques, filter by category, and dive deep into the details of each attack vector.


/ Outro

This is just the beginning. We’ve been working on this project since early last year and have been silently adding things from our internal hosted version to the public. We are really excited to share with you this polished version!

As AI systems evolve and attackers develop new techniques, we'll continue expanding and refining the taxonomy. I'm particularly interested in documenting multi-stage attacks, tool-based injection vectors, and novel evasion techniques as they emerge. If you're working in this space, I want to hear from you. Drop me a line on Twitter @jhaddix or submit a PR to the repo. Let's build the definitive resource for prompt injection security together.

Check out the Arcanum PI Taxonomy today and let me know what you think.

-Jason

Links: