# NLP - Red Team
Welcome to the NLP Red Teaming Notes – a curated collection of tools, payloads, notes, training apps, and high-signal intel for probing the security boundaries of Natural Language Processing (NLP) systems.
This repo is built for red teamers, AI researchers, hackers, and anyone exploring the offensive security side of NLP models.
## What's in the Notes?
### Toolkits
Essential tools for red teaming NLP:
| Date | Repo | Description | Stars | Watchers | Link |
|---|---|---|---|---|---|
| 2025-07-10 | TextAttack | TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP (docs: https://textattack.readthedocs.io/en/master/) | ⭐ 3397 | 👁️ 34 | TextAttack |
| 2022-04-19 | OpenAttack | An Open-Source Package for Textual Adversarial Attack. | ⭐ 772 | 👁️ 16 | OpenAttack |
| 2021-12-13 | TextFooler | A Model for Natural Language Attack on Text Classification and Inference | ⭐ 530 | 👁️ 11 | TextFooler |
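The toolkits above automate adversarial text attacks; the core idea behind TextFooler-style attacks can be sketched in plain Python. The classifier, weights, and synonym table below are toy stand-ins invented for illustration (real frameworks like TextAttack use trained models and embedding-based synonym search), but the greedy word-substitution loop mirrors the technique:

```python
# Toy illustration of a TextFooler-style greedy word-substitution attack.
# The classifier weights and synonym table are hypothetical stand-ins;
# real attacks use trained models and embedding-based candidate selection.

# Toy bag-of-words sentiment weights (hypothetical).
POSITIVE = {"great": 2.0, "good": 1.5, "enjoyable": 1.2}
NEGATIVE = {"bad": -2.0, "boring": -1.5, "dull": -1.2}

def score(words):
    """Return a sentiment score; > 0 means the toy model predicts 'positive'."""
    return sum(POSITIVE.get(w, 0.0) + NEGATIVE.get(w, 0.0) for w in words)

# Hypothetical synonym candidates for substitution.
SYNONYMS = {"great": ["fine", "decent"], "good": ["okay"], "enjoyable": ["watchable"]}

def greedy_attack(text):
    """Greedily swap words for synonyms until the predicted label flips."""
    words = text.split()
    original_label = score(words) > 0
    for i in range(len(words)):
        for candidate in SYNONYMS.get(words[i], []):
            trial = words[:i] + [candidate] + words[i + 1:]
            if (score(trial) > 0) != original_label:
                return " ".join(trial)      # label flipped: attack succeeded
            if abs(score(trial)) < abs(score(words)):
                words = trial               # keep swaps that weaken confidence
    return None  # no adversarial example found under this synonym budget

adv = greedy_attack("a great and enjoyable film")
print(adv)  # a semantically similar sentence the toy model no longer rates positive
```

Real attack frameworks add the pieces this sketch omits: semantic-similarity constraints (so the rewrite preserves meaning), word-importance ranking, and query budgets against black-box models.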
## Known Exploits & CVEs
Search the official CVE database for NLP-related records:
https://www.cve.org/CVERecord/SearchResults?query=NLP
## Disclaimer
All content in this repository is for educational and research purposes only.
Use responsibly. Know the law. Stay ethical.