Risks of using LLMs#
Common advanced LLM powered systems can have severe security risks.
Common security risks are:
AI systems can malfunction when exposed to untrustworthy data, and attackers are exploiting this issue.
New guidance documents the types of these attacks, along with mitigation approaches.
Prompt injection.
Leakage of personally identifiable information (PII)
Harmful prompts. Relevant when you develop your ‘own’ LLM or LLM powered application.
No foolproof method exists for protecting ML/AI systems from security hacks. This is problematic when ML/AI systems are used for health systems, transport systems or weapons. Misdirection is a common threat.
Using LLMs for health saving systems or software that is used for safety applications (cars, trains, plains):
Danger
The outcome of LLMs should never be trusted. Despite imense progress on LLMs and their applications: Transparency is often absent and outcomes should never ever be trusted without a SOLID and GOOD human assessment!