How is Generative AI Impacting Cyber Security?
Read time 5 mins
There’s no question that artificial intelligence – a one-time futuristic notion – is developing rapidly. From smart home devices like Amazon’s Alexa to deepfakes that challenge our concept of reality and OpenAI’s ChatGPT chatbot, AI is a fascinating and fast-developing technology that surrounds us all in one form or another.
Generative artificial intelligence (such as ChatGPT or M365 Copilot, currently in testing) is a type of AI technology that can produce various types of content, including text, imagery, audio and synthetic data.
Generative AI falls under the broad category of machine learning since it’s powered by large machine learning models pre-trained on vast amounts of data, commonly called foundation models (FMs). For example, a common FM subset would be large language models (LLMs), natural language processing computer programs that use AI to generate text. LLMs are trained on billions of words and can speak seemingly ‘naturally’ to users when prompted with a question or instruction.
What does generative AI have to do with cyber security?
In the world of cyber security, AI is creating just as much of a buzz as it is everywhere else. Indeed, the RSA Conference featured several perspectives on the topic, including talks by government officials from the U.S. Cybersecurity and Infrastructure Security Agency (CISA), the National Security Agency (NSA) and the National Aeronautics and Space Administration (NASA).
Opinions at RSA were somewhat divided, with some cyber security vendors maintaining that AI and machine learning have been used broadly in the industry for years and that their place within the enterprise is already accepted. On the other hand, some argued that the popularity of AI right now, particularly LLMs, is more a result of marketing than actual technological advancement.
Still, with generative AI’s use on the rise, there’s no denying that there are multiple considerations about what this could mean for cyber security – some with more implications than others. This makes it a particularly murky area to explore, especially for those not trained in cyber security practices.
For example, TechTarget recently asked ChatGPT how cyber security professionals used the platform. ChatGPT replied with several examples including:
- Security policy and security awareness training documents.
- Vulnerability assessments: performing scans, interpreting reports and suggesting remediation.
- Threat hunting: parsing through logs, identifying patterns and detecting indicators of compromise (see the sketch after this list).
- Threat intelligence analysis: such as simplifying reports down to relevant data and quickly gathering insights from security advisories and online forums.
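As a rough illustration of the threat-hunting item above, the sketch below passes a short firewall log excerpt to an LLM and asks it to flag possible indicators of compromise. The OpenAI Python client, the model name and the log format are illustrative assumptions, and the output is only a starting point for a human analyst to verify.

```python
# A minimal sketch, assuming the OpenAI Python client and an example model name.
# It asks an LLM to summarise a firewall log excerpt and flag possible IoCs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

raw_logs = """
2024-05-01T02:13:44Z DENY TCP 203.0.113.7:4444 -> 10.0.0.12:445
2024-05-01T02:13:45Z DENY TCP 203.0.113.7:4444 -> 10.0.0.13:445
2024-05-01T02:13:46Z ALLOW TCP 10.0.0.21:53211 -> 198.51.100.9:443
"""

prompt = (
    "Summarise this firewall log excerpt for a SOC analyst and list any "
    "possible indicators of compromise, explaining your reasoning:\n" + raw_logs
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever your organisation has approved
    messages=[{"role": "user", "content": prompt}],
)

# Treat the reply as a lead for a human analyst, not a verdict.
print(response.choices[0].message.content)
```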
Interestingly, the chatbot also issued a disclaimer for itself, stating that “while ChatGPT can provide valuable assistance, cyber security professionals should exercise caution and apply their expertise”. This is a pertinent reminder that, while generative AI is here to stay, it offers both risks and rewards to the cyber security community and is not a replacement for human knowledge.
What are the risks of generative AI in cyber security?
It’s worth noting – as others have – that generative AI still suffers from limitations, making it unsuitable for many common cyber security tasks.
For example, LLMs can be dangerous since they’re prone to confidently presenting false information drawn from incorrect sources – though, of course, some humans suffer from the same problem! Additionally, while they’re a useful tool for summarising data and presenting it in a more convenient or user-friendly way, LLMs lack any understanding of the context of what they have presented back. This poses a problem when designing cyber security policies and processes because the AI doesn’t know anything about how your organisation works or your secure practices. Nor can it know the language of your organisation and how best to present information; its approach is genuinely robotic, which could lead to a lack of engagement.
Artificial intelligence in cyber security is undoubtedly a double-edged sword, with each potential benefit carrying an equivalent risk. For instance, AI can be used as a preventative measure in cyber security: it could, for example, suggest fixes for security flaws as developers write code, leaving the tedious task of scanning for and remediating flaws to AI automation.
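As a hypothetical illustration of that kind of assistance, the snippet below shows a classic flaw an AI code-review assistant might flag as a developer types – user input concatenated into a SQL query – alongside the parameterised fix it might suggest. The function and table names are invented for the example.

```python
# Illustration of a flaw an AI assistant might flag, and the fix it might suggest.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flagged: user input is concatenated straight into the SQL string,
    # leaving the query open to SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Suggested remediation: a parameterised query keeps the input as data.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchone()
```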
This sounds like great news – it reduces the need for developers to fix flaws manually in the software development lifecycle and leaves space for them to focus on more creative tasks.
On the other hand, AI models are only as good as the data that powers them, and bad data means inaccurate results. In the above example, generative AI could unwittingly allow malicious code to morph and create an even greater threat that may evade detection by traditional cyber security defences.
Unfortunately, we must always consider that any benefits generative AI brings to cyber security professionals also benefit cyber criminals in the same ways. So, as AI helps organisations scale and advance their cyber security, so too does it help entry-level hackers gain new capabilities.
These days, cyber criminals are using deepfake software in ID theft scams, and it’s not unusual for chatbots or text-generating applications to be used to create phishing emails or malicious code.
Sure, these are cyber threats we already know about, but generative AI could give attackers room to scale up their operations.
Finding the right balance
What will be important in the coming months and years is for security leaders to find the right balance between AI and human knowledge. It’s ok to trust AI with some things, but we must be cautious and never become too reliant on automation without reviewing the results.
The key, for now, is AI governance. There’s simply a lack of policies, policy enforcement and monitoring on the subject, and CISOs and security practitioners need to work together to put appropriate safeguards in place. For instance, too often we read that people have input sensitive information about their company or customers into an AI tool without checking whether it’s the right thing to do. The AI system can’t tell the user whether that’s appropriate, but it can learn from the input and could inadvertently retain something you don’t want it to. Remember, with no guarantees about how LLMs learn from their inputs, we cannot be 100% certain we aren’t sharing trade secrets with the machine.
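As one small, hypothetical example of such a safeguard, the sketch below applies a naive redaction pass that masks obviously sensitive patterns before a prompt leaves the organisation. The patterns and placeholder labels are illustrative assumptions and no substitute for a proper data loss prevention policy.

```python
# A minimal sketch of a redaction pass run before text is sent to an external AI tool.
# The patterns and placeholder labels are illustrative, not a complete DLP control.
import re

REDACTION_RULES = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),              # email addresses
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_NUMBER]"),         # card-like digit runs
    (re.compile(r"(?i)\b(?:password|api[_ ]?key|secret)\s*[:=]\s*\S+"), "[CREDENTIAL]"),
]

def redact(text: str) -> str:
    """Mask known sensitive patterns before the prompt leaves the organisation."""
    for pattern, placeholder in REDACTION_RULES:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarise this: contact jane.doe@example.com, card 4111 1111 1111 1111"
print(redact(prompt))  # -> Summarise this: contact [EMAIL], card [CARD_NUMBER]
```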
Additionally, to mitigate the risk of a data breach caused by malicious AI, it’s up to organisations to increase user awareness of this issue. Cyber teams can, for instance, run frequent security awareness training to ensure that threats stay fresh in employees’ minds and that best practices are reiterated.
Organisations may also run phishing simulation tests, giving employees first-hand experience of defanged attacks and rolling out additional training to anyone who falls for the phish.
The most important thing for end users to remember is to have a healthy dose of scepticism. As AI-generated attacks become more common, human behaviour will be essential to your business’s defence.
To learn more about how we can help your organisation mitigate cyber threats and maintain a high level of security from our in-house, UK-based cyber security operations centre, please get in touch using the green button on this page.