# garak

## Description

garak checks whether an LLM can be made to fail in ways we don't want. It probes for hallucination, data leakage, prompt injection, misinformation, toxicity generation, jailbreaks, and many other weaknesses. If you know nmap or the Metasploit Framework (msf), garak does something similar, but for LLMs.

Home page for this solution: https://docs.garak.ai/garak
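To give a feel for how garak is driven, here is a minimal command-line sketch. The install step and the `--list_probes`, `--model_type`, `--model_name`, and `--probes` flags follow the garak documentation; the specific model and probe named below are illustrative choices, and the OpenAI run assumes an `OPENAI_API_KEY` in the environment.

```shell
# Install garak from PyPI
python -m pip install -U garak

# Enumerate the probes this garak version ships with
python -m garak --list_probes

# Point the prompt-injection probe family at an OpenAI-hosted model
# (model name is illustrative; requires OPENAI_API_KEY to be set)
python -m garak --model_type openai --model_name gpt-3.5-turbo --probes promptinject
```

Each run prints per-probe results and writes its findings to a report file for later analysis.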

## Overview

| Key | Value |
| --- | --- |
| Name | garak |
| Description | the LLM vulnerability scanner |
| License | Apache License 2.0 |
| Programming Language | Python |
| Created | 2023-05-10 |
| Last update | 2025-02-18 |
| GitHub Stars | 3878 |
| Project Home Page | https://discord.gg/uVch4puUCs |
| Code Repository | NVIDIA/garak |
| OpenSSF Scorecard | Report |

Note:

- Created date is the date the repository was created on GitHub.com.
- Last update is only the last date I ran an automated check, not necessarily the project's latest activity.
- Do not attach too much value to GitHub stars. They are a vanity metric! Star counts can be misleading and do not indicate whether an SBB is high quality or widely used.