ChatGPT and Cyber Security

David R Oliver 🏗️
Future Architecture
8 min read · May 6

Cybersecurity Tactics for Safely Harnessing ChatGPT’s Capabilities


If you're unfamiliar with ChatGPT at this point, that is remarkable given how rapidly and widely this generative AI service has risen to prominence.

Organisations and even nation-states have been caught off guard and scrambled to react: some have taken the draconian measure of banning ChatGPT altogether, while others have adopted a more laissez-faire approach.

No wonder there is confusion in determining the right course to take. Let's review some of the guidance and see if making an informed decision is possible.

First, let's examine what the UK National Cyber Security Centre (NCSC) says about ChatGPT and large language models (LLMs).

The NCSC recommends:

- not to include sensitive information in queries to public LLMs like ChatGPT, but private LLMs may be a viable option.

- not to submit queries to public LLMs that would lead to issues were they made public.
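The first recommendation above can be partially operationalised with an outbound filter that scrubs likely-sensitive substrings before a prompt leaves the organisation's boundary. A minimal sketch in Python, using hypothetical regex patterns chosen for illustration (a real deployment would rely on a dedicated DLP or PII-detection service rather than hand-rolled patterns):

```python
import re

# Hypothetical patterns for a few common sensitive tokens.
# A production system would use a proper DLP/PII-detection service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK_NINO": re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b"),      # e.g. AB123456C
    "API_KEY": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive substrings with placeholders before
    the prompt is sent to a public LLM."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Summarise this email from alice@example.com"))
# → Summarise this email from [EMAIL REDACTED]
```

Filters like this reduce accidental leakage but cannot catch sensitive information expressed in free text, which is why the NCSC's guidance is framed as "don't include it at all" rather than "rely on scrubbing".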

Private LLMs can be offered by cloud providers or be entirely self-hosted. Self-hosted LLMs are expensive but might be suitable for handling sensitive organisational data after a thorough security assessment.

For cloud-provided LLMs, it's crucial to understand the terms of use, privacy policy, and how data is managed, shared, and accessed.

The NCSC asks: "Do LLMs make life easier for cybercriminals?"

As LLMs excel at replicating writing styles on demand, there is a risk of criminals using them to write convincing phishing emails, including emails in multiple languages. This could aid attackers who have strong technical capabilities but lack the linguistic skills to craft convincing phishing emails (or conduct social engineering) in their targets' native language.

The Cloud Security Alliance (CSA) provides a more in-depth report that takes a closer look at the risks. Also, the CSA offers a presentation to summarise this report.

The report provides examples of how malicious actors can use an LLM to improve their toolset; out of interest, let's review these methods in more depth. The CSA rates each method in terms of risk, impact and likelihood.

