The NOISE, November 2025, Issue 1


I thought my ChatGPT was mad at me. Bearing in mind, I’d just asked, ‘Has AI gone too far?’ and what followed was the ellipsis equivalent of an awkward silence. My residual anxiety over artificial intelligence (AI) had clouded common sense, and the delay was actually due to something much more familiar (I was on a train with bad signal). Regardless, it’s interesting how these ethical concerns over AI now co-exist with our habitual, everyday use of it. 

LLMs are a type of generative AI, and have quickly infiltrated numerous industries and aspects of our lives. Within an aggressively competitive market, ChatGPT’s parent company, OpenAI, are constantly developing new products, improved models, and even – allegedly – a web browser to rival Google‘s. Meanwhile, many of us still have whiplash from the basic concept of ChatGPT, let alone it’s more-capable iterations.

Security remains a prominent concern of LLMs, particularly the safety of our data and the scope of criminal activity. In March 2023, just four months after it launched, OpenAI suffered a data breach which left hundreds of customer’s information visible to others on the platform. This period lasted nine hours, and included access to full names, addresses and partial credit card information. With the ongoing changes to the company’s team and profit structure, it’s unsurprising anxiety is heightened over the potential for more severe and sophistication attacks.

First things first, ChatGPT is good for security. In this new technological era, LLMs are essential for facilitating modern security, and affording greater protection against threats also enhanced by machine learning. These beneficial tools include de-bugging, writing security code, real-time network mapping, identification of faults in smart contracts, rapid filtering of vulnerabilities and proactively analysing threats.

That is not to say there are not risks. Recently, GOV.UK assessed the risks of generative AI to the public’s security. The official government publication noted that individuals with political, ideological or personal agendas can utilise generative AI, like ChatGPT, to enhance their criminal behaviour.

However, the report declared that over the next 1.5 years, the public can expect generative AI to intensify existing risks rather than create new ones altogether. It’s projected that ChatGPT will facilitate criminals in threats such as scams, theft, data harvesting, distribution of illegal images, impersonation, radicalisation and more. Methods of exploiting these LLMs include, but are not limited to:

  • Enabling personalised phishing attacks
  • Poisoning the model’s training data
  • Prompt injection (aka manipulating the outputs)  
  • Using the model’s output to access to sensitive data 
  • Perturbation (aka tests of robustness) to identify the model’s weaknesses
  • Potential use in weapon assembly 

Also published this year was the government’s AI opportunities Action Plan. This is a strategy to increase the country’s stake in artificial intelligence, crafted with an unapologetically ambitious approach. Following the UK’s competition in the AI market overseas, there is now a hunger to make instead of take what’s offered from companies abroad.

The action plan’s goal is to boost the economy, provide new jobs and improve everyday life, however, the report didn’t mention proposed security measures to facilitate this strategy. Instead, it was stated that the ‘government will need to be prepared to absorb some risk in the context of uncertainty.’

Although responsibility lies on governing bodies to protect people in policy and law-making, the National Cyber Security Centre recommends that users don’t ask ChatGPT anything they wouldn’t want public. This is due to ChatGPT’s internet transmissions (like any others) at risk of hacking, misuse or data leaks. That being said, OpenAI could even be sold to owners who alter the privacy parameters much more broadly.

One thing is for sure: Generative AI isn’t going anywhere. For the greater good or for our demise, life as we know it will (continue to) change. Regulations, however, are still up for grabs…


“Over the next 1.5 years, the public can expect generative AI to intensify existing risks rather than create new ones altogether.”