AI/ML for Cyber security#
ML/AI/LLMs are practically everywhere these days, from your phone’s voice assistant to self-driving cars.
The complexity of current AI/ML technologies has fuelled fears that AI powered applications will cause harm in unforeseen circumstances, or that they are manipulated to act in harmful ways. Think of a self driving car with its own ethics or algorithms that make prediction based on your personal data that really scare you.
Understanding how cyber security and AI technology intersect is vital. You should gain some knowledge on AI/ML technology to make informed decisions, both personally and professionally on applying or not AI-based solutions for cyber security.
Tip
A great (free)guide to quickly learn the essentials on AI/ML technology is the Free and Open ML/AI Reference Architecture
Large Language Models (LLMs) have transformed the landscape of artificial intelligence, enabling systems capable of generating text, understanding context, and even engaging in complex reasoning.
Whether you use advanced ML/AI/LLM systems or not:
You MUST BE aware of potential security, privacy and safety risks.
Using AI for security defense is NOT the holy grail.
Using and integrating AI into cyber security processes comes with new types of risks.
Warning
DO NOT rely on Al / LLM systems to solve your cyber security problems!
This means you should avoid using AI/ML driven cyber security solutions, even if it is labelled FOSS for:
“detecting” vulnerabilities.
Monitoring your environment for anomalies.
Writing procedures.
Network intrusion of host intrusion
Risks analyses
Creating a thread model
Creating requirements or principles. Or other basic security functions.
If you do, you will be more vulnerable for security breaches instead of less. AI solutions that are built upon LLMs are far from mature. HIDS systems have a long history of applying ML technologies as well as spam-filters. Creating a tangible security product that ‘learns’ from patterns is not new for security. ML technologies have been applied for many years for HIDS systems and spam-filters. So using AI/ML for cyber security has been done for many years with variable success.
As a basic guideline:
Tip
For cyber security products be very conservative with adopting new IT hypes.
Warning
IT hypes like AI, AI-agents and LLMs are not the holy grail for solving our cyber security problems.
In the end you always pay more for cyber security solutions, but the risks still remain.
The reasons why many hyped AI security products overpromise and do not work are simple:
LLMs are trained on yesterday’s attack patterns and solutions , not on today’s challenges.
Most good threat models and security architectures are not published as open access on the internet.
Security is not a product, but a process. A security product can replace the human factor that is crucial for mitigating and managing cyber threats. The human factor within cyber defense is crucial.
Risk assessments can not be outsourced to an AI-agent or tool. Crucial within a risk assessment is the specific context. And the context involves a broad scope of humans, processes, business systems and more. Risks are never based on used technology alone. AI is still not suited for judging risks within a specific environment where crucial factors transcend the used technology.
Summarized:
AI tools cannot ‘understand’ code, finding security vulnerabilities requires understanding code AND understanding human-level concepts like intent, common usage, and context. Today’s AI/ML technology that is built in within new cyber security products overpromises.
Risks of using LLMs for cyber security#
Common advanced LLM powered systems can have severe security risks.
Common risks are:
AI systems can malfunction when exposed to untrustworthy data, and attackers are exploiting this issue.
New guidance documents the types of these attacks, along with mitigation approaches.
Prompt injection.
Leakage of personally identifiable information (PII)
Harmful prompts. Relevant when you develop your ‘own’ LLM or LLM powered application.
No foolproof method exists for protecting ML/AI systems from security hacks. This is problematic when ML/AI systems are used for health systems, transport systems or weapons. Misdirection is a common threat.
Using AI to simplify cyber security#
AI/ML and especially LLM powered tools are an exciting and powerful technology. The continuous use and growth of machine learning technology opens new opportunities.
It also enables solving complex security problems in a more simple way. Cyber challenges that are impossible to solve by using traditional software technologies.
In the following areas using LLMs can and will simplify cyber security challenges:
Assists with creating better FOSS security tools. Especially creating nicer UIs. Most good FOSS security tools lack a nice looking web based GUI (Graphical User Interface). LLMs can speed up the process of creating more user friendly UIs so that security tool usage will become simpler.
Assist with creating better test suites for software. Writing tests for code makes software often more robust and increases quality. However manual writing test for security software tools is time consuming and boring work.
Trivial solutions for challenges. But mind: The answers may not be correct! And do you really want to give away some confidential data about your security challenges to a private company? Who often sells data to the highest bidder and is vulnerable for data breaches?
Most of the time using wikipedia for background information on security is better than using an AI chatbot. This is due to the fact that most LLMs are flooded with incorrect information that lives on the internet or is AI generated content with even more mistakes.
As a general rule:
Use proven FOSS security solutions. New FOSS Security products will be created with the LLM capabilities when it is beneficial in relation to traditional ML techniques.