# ML – Red Teaming
This page is intended for red teamers, AI researchers, and offensive-security enthusiasts. Here you will find tools, payloads, and notes on probing the security boundaries of machine learning (ML) systems.
## Toolbox
Offensive tools for testing the security of ML systems:
| Date | Repo | Description | Stars | Watchers | Link |
|---|---|---|---|---|---|
| N/A | mlsploit | No description | ⭐ 0 | 👁️ 0 | mlsploit |
| 2025-11-13 | adversarial-robustness-toolbox | Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams | ⭐ 5914 | 👁️ 97 | adversarial-robustness-toolbox |
| 2025-05-07 | vger | An interactive CLI application for interacting with authenticated Jupyter instances. | ⭐ 55 | 👁️ 1 | vger |
| 2025-02-13 | Model-Inversion-Attack-ToolBox | A comprehensive toolbox for model inversion attacks and defenses, which is easy to get started. | ⭐ 192 | 👁️ 2 | Model-Inversion-Attack-ToolBox |
| 2024-03-04 | foolbox | A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX | ⭐ 2952 | 👁️ 40 | foolbox |
| 2023-01-31 | cleverhans | An adversarial example library for constructing attacks, building defenses, and benchmarking both | ⭐ 6426 | 👁️ 184 | cleverhans |
| 2022-08-08 | AdvBox | Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle、PyTorch、Caffe2、MxNet、Keras、TensorFlow and Advbox can benchmark the robustness of machine learning models. Advbox give a command line tool to generate adversarial examples with Zero-Coding. | ⭐ 1410 | 👁️ 49 | AdvBox |
| 2022-05-29 | advertorch | A Toolbox for Adversarial Robustness Research | ⭐ 1362 | 👁️ 24 | advertorch |
| 2022-05-17 | deep-pwning | Metasploit for machine learning. | ⭐ 571 | 👁️ 59 | deep-pwning |
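Several of the libraries above (cleverhans, foolbox, ART, advertorch) implement gradient-based evasion attacks. The simplest of these is the Fast Gradient Sign Method (FGSM): perturb the input in the direction of the sign of the loss gradient. The sketch below illustrates the core idea on a toy logistic-regression "model" with hand-picked weights; all data and parameters here are illustrative, not taken from any of the listed tools.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM: nudge x in the direction that increases the loss.

    For logistic regression with binary cross-entropy loss, the gradient
    of the loss with respect to the input x is (p - y) * w.
    """
    p = sigmoid(w @ x + b)            # model's predicted probability of class 1
    grad_x = (p - y) * w              # d(BCE loss)/dx
    return x + eps * np.sign(grad_x)  # perturb each feature by +/- eps

# Toy "model" and a clean input with true label y = 1 (illustrative values).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0

clean_pred = sigmoid(w @ x + b)
x_adv = fgsm(x, y, w, b, eps=0.5)
adv_pred = sigmoid(w @ x_adv + b)

print(f"clean prediction: {clean_pred:.3f}")  # confident class 1
print(f"adv prediction:   {adv_pred:.3f}")    # pushed toward class 0
```

The listed toolboxes apply the same principle to deep networks, using automatic differentiation to obtain the input gradient, and add stronger iterative variants (PGD, C&W) plus robustness benchmarking.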
## Vulnerable ML machines for practice
Test your skills on vulnerable ML machines:
| Date | Repo | Description | Stars | Watchers | Link |
|---|---|---|---|---|---|
| 2020-08-24 | adversarial_ml_ctf | This repository is a CTF challenge, showing a security flaw in most (all?) common artificial neural networks. They are vulnerable for adversarial images. | ⭐ 6 | 👁️ 1 | adversarial_ml_ctf |
## General resources
Additional resources for ML security enthusiasts:
- Awesome MLSecOps