A curated list of awesome security tools, experimental case or other interesting things with LLM or GPT.
Here is A nice tool to Finetune ALL LLMs with ALL Adapeters on ALL Platforms!
🧰
- SecGPT - SecGPT aims to make further contributions to network security by combining LLM, including penetration testing, red-blue confrontations, CTF competitions, and other aspects.
- AutoAudit - An LLM for Cyber Security
- secgpt - Cyber security LLM(Lora finetuned with baichuan-13B using some material of cyber security)
- HackerGPT-2.0 - HackerGPT is your indispensable digital companion in the world of hacking.
- SourceGPT - prompt manager and source code analyzer built on top of ChatGPT as the oracle
- vulnhuntr - Zero shot vulnerability discovery using LLMs
- ChatGPTScanner - A white box code scan powered by ChatGPT
- chatgpt-code-analyzer - ChatGPT Code Analyzer for Visual Studio Code
- hacker-ai - An online tool using AI to detect vulnerabilities in source code
- audit_gpt - Fine-tuning GPT for Smart Contract Auditing
- vulchatgpt - Use IDA PRO HexRays decompiler with OpenAI(ChatGPT) to find possible vulnerabilities in binaries
- Ret2GPT - Advanced AI-powered binary analysis tool leveraging OpenAI's LangChain technology, revolutionizing CTF Pwners' experience in binary file interpretation and vulnerability detection.
- CensysGPT Beta - The tool enables users to quickly and easily gain insights into hosts on the internet, streamlining the process and allowing for more proactive threat hunting and exposure management
- GPT_Vuln-analyzer - Uses ChatGPT API, Python-Nmap, DNS Recon modules and uses the GPT3 model to create vulnerability reports based on Nmap scan data, and DNS scan information. It can also perform subdomain enumeration to a great extent
- SubGPT - SubGPT looks at subdomains you have already discovered for a domain and uses BingGPT to find more.
- Navi - A QA based Reconnaissance Tool with GPT
- ChatCVE - The ChatCVE Lang Chain App is an AI-powered devSecOps application 🔍, for oganizations triaging and aggregating CVE (Common Vulnerabilities and Exposures) information.
- ZoomeyeGPT - ZoomEyeGPT browser extension is a GPT-based Chrome browser extension designed to bring AI-assisted search experience to ZoomEye users.
- uncover-turbo - Realize a general-purpose natural language surveying and mapping engine, and open up the last mile from natural language to surveying and mapping grammar.
- DevOpsGPT - AI-Driven Software Development Automation Solution
- PentestGPT - A GPT-empowered penetration testing tool
- burpgpt - A Burp Suite extension that integrates OpenAI's GPT to perform an additional passive scan for discovering highly bespoke vulnerabilities, and enables running traffic-based analysis of any type.
- ReconAIzer - A Burp Suite extension to add OpenAI (GPT) on Burp and help you with your Bug Bounty recon to discover endpoints, params, URLs, subdomains and more!
- CodaMOSA - CodaMOSA is the paper code of CodaMOSA: Escaping Coverage Plateaus in Test Generation with Pre-trained Large Language Models. It implements a fuzzer combined with OpenAI API, aiming to alleviate the problem of stagnant coverage in traditional fuzz.
- PassGAN - A Deep Learning Approach for Password Guessing. HomeSecurityHeroes land a Product, and you can test how much time an AI would need to crack your password here.
- nuclei-ai-extension - Official by Nuclei Team. Browser Extension for Rapid Nuclei Template Generation.
- nuclei_gpt - Only need to submit the relevant Request and Response and the description of the vulnerability to generate a Nuclei PoC.
- Nuclei Templates AI Generator -- Create Nuclei templates by textual description (e.g., vulnerability scanners by PoC).
- hackGPT - Leverage OpenAI and ChatGPT to do hackerish things
- k8sgpt - a tool for scanning your Kubernetes clusters, diagnosing, and triaging issues in simple English.
- cloudgpt - Vulnerability scanner for AWS customer managed policies using ChatGPT
- IATelligence - About IATelligence is a Python script that will extract the IAT of a PE file and request GPT to get more information about the API and the ATT&CK matrix related
- rebuff - Prompt Injection Detector.
- Callisto - An Intelligent Automated Binary Vulnerability Analysis Tool.
- LLMFuzzer - LLMFuzzer is the first open-source fuzzing framework specifically designed for Large Language Models (LLMs), especially for their integrations in applications via LLM APIs.
- Vigil - Prompt injection detection and LLM prompt security scanner
- ChatGPT-Web-Setting-Funny-Abuse - Play with ChatGPT-Web and found the HTML rendering in description settings.
- LLM4Decompile - Reverse Engineering: Decompiling Binary Code with Large Language Models
- Gepetto - About IDA plugin which queries OpenAI's gpt-3.5-turbo language model to speed up reverse-engineering
- gpt-wpre - Whole-Program Reverse Engineering with GPT-3
- G-3PO - A Script that Solicits GPT-3 for Comments on Decompiled Code
- beelzebub - Go-Based Low-Code Honeypot Framework with Enhanced Security, Leveraging GPT-3 for System Virtualization
- wolverine - Auto fix the bugs in your Python Script/Code
- falco-gpt - AI-generated remediations for Falco audit events
- selefra - an open-source policy-as-code software that provides analytics for multi-cloud and SaaS.
- openai-cti-summarizer - openai-cti-summarizer is a tool for generating threat intelligence summary reports based on OpenAI's GPT-3.5 and GPT-4 API
🌰
- Lost in ChatGPT's memories: escaping ChatGPT-3.5 memory issues to write CVE PoCs
- I built a Zero Day virus with undetectable exfiltration using only ChatGPT prompts
- Experimenting with GPT-3 for Detecting Security Vulnerabilities in Code
- We put GPT-4 in Semgrep to point out false positives & fix code
- A Practical, AI-Generated Phishing PoC With ChatGPT
- Capturing the Flag with GPT-4
- I Used GPT-3 to Find 213 Security Vulnerabilities in a Single Codebase
- Using ChatGPT to generate encoder and supporting WebShell
- Using OpenAI Chat to Generate Phishing Campaigns -- Include Phishing Platform
- Chat4GPT Experiments for Security
- GPT-3 use cases for Cybersecurity
- AI-Powered Fuzzing: Breaking the Bug Hunting Barrier
- GPT-4 Technical Report -- OpenAI's own security assessment and mitigation of the model
- Ignore Previous Prompt: Attack Techniques For Language Models -- Pioneering work of Prompt Injection
- More than you've asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models
- RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models
- Exploiting Programmatic Behavior of LLMs: Dual-Use Through Standard Security Attacks
- Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
- Can We Generate Shellcodes via Natural Language? An Empirical Study
- Dissecting redis CVE-2023-28425 with chatGPT as assistant
- Security Code Review With ChatGPT
- ChatGPT happy to write ransomware, just really bad at it
- Create ATT&CK Groups Knowledge Base
- Model Confusion - Weaponizing ML models for red teams and bounty hunters
- Using LLMs to reverse JavaScript variable name minification
- shortest prompt that will enable GPT to protect the secret key
- a CTF-like game that teaches how to bypass LLM using language hacks
- ai-goat - Learn AI security through a series of vulnerable LLM CTF challenges.
🚨
- modelscan - Protection against Model Serialization Attacks
- ATT&CK for LLM Apps
- The OWASP Top 10 for Large Language Model Applications project
- Google AI Red Team
- PurpleLlama - Empowering developers, advancing safety, and building an open ecosystem
- agentic_security - Agentic LLM Vulnerability Scanner
- garak - LLM vulnerability scanner
- inspect_ai - Inspect: A framework for large language model evaluations
- Chat GPT "DAN" (and other "Jailbreaks")
- ChatGPT Prompts for Bug Bounty & Pentesting
- promptmap - automatically tests prompt injection attacks on ChatGPT instances
- Use "Typoglycemia" to Bypass the LLM's Security Policy
- Universal and Transferable Adversarial Attacks on Aligned Language Models
- promptbench - A robustness evaluation framework for large language models on adversarial prompts
- jailbreak_llms - A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
- Building A Virtual Machine inside ChatGPT - deprecated but interesting
- LangChain vulnerable to code injection -- CVE-2023-29374
- ai-exploits - A collection of real world AI/ML exploits for responsibly disclosed vulnerabilities
- gpt4free - Just API's from some language model sites.
- EdgeGPT - Reverse engineered API of Microsoft's Bing Chat AI
- GPTs - leaked prompts of GPTs
- SecureGPT – Dynamically test the security of your ChatGPT Plugins APIs (Free DAST for ChatGPT Plugins).
Your contributions are always welcome! Please take a look at the contribution guidelines first.
If you have any question about this opinionated list, do not hesitate to open an issue on GitHub.
Thanks again for your contribution and keeping this community vibrant. ❤️