LLM - Red Team
Introduction
This page is built for red teamers, AI researchers, and enthusiats of the ofensive security side of LLM. You can find here tools, payloads, notes for testing the security boundaries of Large Language Models (LLMs).
Payloads
Repositories with payloads to be used during pentests:
| Date | Repo | Description | Stars | Watchers | Link |
|---|---|---|---|---|---|
| N/A | Basic-ML-prompt-injections | No description | β 0 | ποΈ 0 | Basic-ML-prompt-injections |
| 2026-03-18 | BlackFriday-GPTs-Prompts | List of free GPTs that doesn't require plus subscription | β 9290 | ποΈ 137 | BlackFriday-GPTs-Prompts |
| 2026-03-02 | ChatGPT_DAN | ChatGPT DAN, Jailbreaks prompt | β 11606 | ποΈ 284 | ChatGPT_DAN |
| 2026-02-17 | L1B3RT4S | TOTALLY HARMLESS LIBERATION PROMPTS FOR GOOD LIL AI'S! |
β 18157 | ποΈ 489 | L1B3RT4S |
| 2026-02-17 | CL4R1T4S | LEAKED SYSTEM PROMPTS FOR CHATGPT, GEMINI, GROK, CLAUDE, PERPLEXITY, CURSOR, DEVIN, REPLIT, AND MORE! - AI SYSTEMS TRANSPARENCY FOR ALL! π | β 13999 | ποΈ 332 | CL4R1T4S |
| 2026-01-13 | pallms | Payloads for Attacking Large Language Models | β 130 | ποΈ 2 | pallms |
| 2025-10-29 | Open-Prompt-Injection | This repository provides a benchmark for prompt injection attacks and defenses in LLMs | β 421 | ποΈ 3 | Open-Prompt-Injection |
| 2024-12-24 | jailbreak_llms | [CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts). | β 3626 | ποΈ 45 | jailbreak_llms |
| 2024-11-10 | Prompt-injection-payloads | These are prompt injection payloads you can use for AI Chatbots | β 3 | ποΈ 1 | Prompt-injection-payloads |
| 2024-10-23 | ai-exploits | A collection of real world AI/ML exploits for responsibly disclosed vulnerabilities | β 1704 | ποΈ 38 | ai-exploits |
| 2024-08-02 | Prompt-Injection-Everywhere | Prompt Injections Everywhere | β 197 | ποΈ 4 | Prompt-Injection-Everywhere |
| 2023-11-22 | prompt-injection | Official repo for Customized but Compromised: Assessing Prompt Injection Risks in User-Designed GPTs | β 31 | ποΈ 3 | prompt-injection |
Tools
Open Source
| Date | Repo | Description | Stars | Watchers | Link |
|---|---|---|---|---|---|
| 2026-04-03 | giskard-oss | π’ Open-Source Evaluation & Testing library for LLM Agents | β 5216 | ποΈ 39 | giskard-oss |
| 2026-04-03 | garak | the LLM vulnerability scanner | β 7450 | ποΈ 52 | garak |
| 2026-04-02 | deepteam | DeepTeam is a framework to red team LLMs and LLM systems. | β 1440 | ποΈ 6 | deepteam |
| 2026-03-27 | spikee | Simple Prompt Injection Kit for Evaluation and Exploitation | β 164 | ποΈ 8 | spikee |
| 2026-03-25 | PyRIT | The Python Risk Identification Tool for generative AI (PyRIT) is an open source framework built to empower security professionals and engineers to proactively identify risks in generative AI systems. | β 2 | ποΈ 0 | PyRIT |
| 2026-02-27 | GPTFuzz | Official repo for GPTFUZZER : Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts | β 575 | ποΈ 5 | GPTFuzz |
| 2026-02-16 | ps-fuzz | Make your GenAI Apps Safe & Secure |
β 667 | ποΈ 11 | ps-fuzz |
| 2026-02-16 | ps-fuzz | Make your GenAI Apps Safe & Secure |
β 667 | ποΈ 11 | ps-fuzz |
| 2026-02-06 | FuzzyAI | A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs. | β 1295 | ποΈ 19 | FuzzyAI |
| 2026-02-06 | FuzzyAI | A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs. | β 1295 | ποΈ 19 | FuzzyAI |
| 2026-02-04 | plexiglass | A toolkit for detecting and protecting against vulnerabilities in Large Language Models (LLMs). | β 154 | ποΈ 5 | plexiglass |
| 2026-02-03 | agentic_security | Agentic LLM Vulnerability Scanner / AI red teaming kit π§ͺ | β 1836 | ποΈ 21 | agentic_security |
| 2026-02-03 | LLMart | LLM Adversarial Robustness Toolkit, a toolkit for evaluating LLM robustness through adversarial testing. | β 49 | ποΈ 1 | LLMart |
| 2026-01-02 | PentestGPT | Automated Penetration Testing Agentic Framework Powered by Large Language Models | β 12387 | ποΈ 273 | PentestGPT |
| 2025-12-01 | promptmap | a security scanner for custom LLM applications | β 1166 | ποΈ 12 | promptmap |
| 2025-11-13 | adversarial-robustness-toolbox | Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams | β 5914 | ποΈ 97 | adversarial-robustness-toolbox |
| 2025-10-29 | Open-Prompt-Injection | This repository provides a benchmark for prompt injection attacks and defenses in LLMs | β 421 | ποΈ 3 | Open-Prompt-Injection |
| 2025-10-27 | whistleblower | Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily | β 151 | ποΈ 3 | whistleblower |
| 2025-02-18 | artkit | Automated prompt-based testing and evaluation of Gen AI applications | β 165 | ποΈ 6 | artkit |
| 2024-11-04 | jailbreak-evaluation | The jailbreak-evaluation is an easy-to-use Python package for language model jailbreak evaluation. | β 27 | ποΈ 0 | jailbreak-evaluation |
| 2024-10-23 | prompt-injection | Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external tools. | β 34 | ποΈ 2 | prompt-injection |
| 2024-02-12 | LLMFuzzer | π§ LLMFuzzer - Fuzzing Framework for Large Language Models π§ LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs. ππ₯ | β 348 | ποΈ 5 | LLMFuzzer |
| 2023-10-16 | haystack | A suite of red teaming and evaluation frameworks for language models | β 5 | ποΈ 1 | haystack |
| 2023-09-24 | cogsec | β‘ Vigil β‘ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs | β 0 | ποΈ 0 | cogsec |
Commercial
| Company | Tool | Description | Country (Origin) | Major Shareholder Country | Link |
|---|---|---|---|---|---|
| Giskard | Continuous Red Teaming | Best for EU companies; strong detection of hallucinations and bias. | France | France/EU | Link |
| Promptfoo | Red Teaming for AI Apps | Developer standard; 50+ test types, huge prompt library. | USA | USA | Link |
| CalypsoAI (F5) | Agentic Warfare | Scalable Red Teaming for AI agents and enterprise-class systems. | Ireland | USA (F5, Inc.) | Link |
| Lakera | Lakera Red | Real-time protection. | Switzerland | Israel (Check Point) | Link |
| HiddenLayer | Automated Red Teaming | Protection of model intellectual property and artefact scanning (model scanning). | USA | USA | Link |
| Mindgard | Continuous & Automated AI Red Teaming | DAST-AI automation. | UK | USA / UK | Link |
| Protect AI | Recon | Scalable Red Teaming for AI. | USA | USA | Link |
| Cisco | Cisco AI Defense | End-to-end protection for enterprises building, using, and innovating with AI. | USA | USA | Link |
Security Testing Framework
- LLM Adversarial Testing - Dec 7, 2024
LLM Testing Guidelines
- Mohit0 - Prompt Injection Cheatsheet - Oct 4, 2024
- Offensive ML Playbook - Apr 17, 2025
- Red Teaming LLMs: The Ultimate Step-by-Step LLM Red Teaming Guide - April 8, 2025
Inspiration & Ideas
- Novel Universal Bypass for All Major LLMs - Apr 24, 2025
- Prompt Attack Scenarios (Gist) - Apr 22, 2025
- SpAIware - Apr 17, 2025
- Embrace The Red - Apr 6, 2025
- An Emoji is All You Need⦠To Hack your LLM - Feb 20, 2025
- Lessons from red teaming 100 generative AI products - January 13, 2025
General Resources
Must-know resources for any AI security enthusiast:
- Learn Prompting - Mar 25, 2025
- Dair AI - Prompt Engineering Guide β Apr 5, 2025
- PayloadsAllTheThings: Prompt Injection β Mar 17, 2025
- LLM Security 101 β Oct 13, 2023
- LLMSecurity.net β Oct 11, 2023
- PIPE: Prompt Injection Penetration Environment β Aug 25, 2023
- LLM Security (by greshake) β Jun 17, 2023
- Prompt Injection PoC (Joseph Thacker) β May 19, 2023
- HuggingFace Red Teaming Blog β Feb 24, 2023
Disclaimer
All content in this repository is for educational and research purposes only.
Use responsibly. Know the law. Stay ethical.