Unlocking the patterns: using our data while keeping our privacy intact
The IMPACT, November 2025, Issue 1
Is it just me, or did autumn arrive in a hurry this year? As I sit in my favourite café, rain trickles down the windowpane and temperatures teeter closer to single figures, yet the door is propped open in a sort of defiance, a refusal to let summer go.
You might be wondering, ‘Gosh, I totally agree, but what does talking about the weather have to do with the impact of ChatGPT?’ Well, like any British person worth their salt, I know it’s only polite to open a conversation by talking about the weather, but as I sit down to ruminate on this topic, it feels especially apt. Because, well, everything is moving so quickly, isn’t it?
This time last year, ‘How do we stop students using GenAI?’ was the hot topic on every academic’s lips, and already that pursuit has come and gone with the seasons. By now, the world’s most popular chatbot has become an inevitability, an expected team member on every group project, staff meeting, and late-night study session. You can find ChatGPT, Gemini, Copilot, and Claude thumbing through reading lists and proofreading papers. Their fingerprints can be found virtually everywhere.
The dust has not yet settled from the impact of large language models (LLMs), and we have now reached the point where an idea stops being speculative and starts altering behaviour, structures, and ethics. So what is gained and what is lost? Which voices are heard, and which are most affected? How will our lives continue to change?
Ethical AI at Manchester
Manchester is keen not only to utilise AI, but to build the protections needed to evolve it ethically. The University’s Centre for Digital Trust and Society, now recognised as an Academic Centre of Excellence in Cyber Security Research, brings together computer scientists, criminologists and ethicists to ask what it really takes to keep digital systems resilient. The work goes far beyond firewalls and encryption, looking at how humans, institutions, and technologies interact, and ultimately asking: how do we build security that people can genuinely trust?
Alongside this, the Centre for AI Fundamentals is working to ensure the very foundations of machine learning are airtight. How do models learn? Where might they leak? How do biases creep in? How can we design algorithms that can be explained, audited and kept in check?
And in projects like TREvolution – a £6.2 million programme to develop Trusted Research Environments – Manchester is shaping how sensitive health and population data can be studied without exposing individuals and their information. Safe spaces like these are right at the forefront of innovation, enabling researchers to unlock patterns in society’s most valuable data whilst keeping our privacy intact.
These are the practices and architectures that say, ‘yes, we will innovate, but not at the cost of genuine trust.’ If momentum is an inevitability, then the responsibility to protect one another is paramount.
Inclusive and accessible tutoring
As a leading institution in AI, Manchester moved quickly to publish guidance on student use of generative tools in assessments. The aim: accept that students will use these tools, but steer that use towards critical thinking, reflection, and the ability to develop their own arguments.
Staff, too, are discovering small productivity wins through Copilot – meeting notes, quick reports and other routine tasks compressed into minutes rather than hours.
And this technology’s potential reaches further still, with new research suggesting that tools which proofread, summarise, and shape writing could be especially transformative for neurodivergent students. Where in the past educators may have been short on specialist training or resources, LLMs could one day help spot early signs of learning needs – though only if safeguards around privacy and bias are carefully designed.
Breakthrough applications at Manchester extend into Intelligent Tutoring Systems research, where models like GPT-4 can be leveraged for ‘Socratic-style interactions and personalised feedback’. The ‘Socratic Playground’ tutoring pilot was tested with 30 undergraduate students working on foundational English skills, using adaptive prompts to help them master vocabulary, grammar and sentence structure. The results showed measurable progress and, just as importantly, higher satisfaction with learning.
From clinics to compassion
From diagnosis to the clinic, LLMs are stepping into increasingly diverse arenas. Early research shows that while AI-written medical notes may omit key details or demand higher reading levels, the writing itself can read as more empathetic and compassionate than that of their human counterparts.
Perhaps this isn’t so surprising, though: just this year, AI therapy became a viral topic across both TikTok and Reddit – with creators claiming ChatGPT helped them “more than 15 years of therapy.”
“If I look happier, it’s because ChatGPT is my therapist...”
With 800 million users in 2025 – over 10% of the world’s population – ChatGPT’s maker OpenAI reports that 70% of platform activity extends into our personal lives, showing these tools are shaping our emotional vocabulary as much as our work productivity. The challenge, and Manchester’s focus, is ensuring any gains in compassion never come at the cost of patient safety.
The future of productivity
In offices, labs and lecture theatres, generative AI has swiftly taken workplaces by storm. Copilot drafts reports, summarises meetings and collates research notes. Marketing teams lean on LLMs for audience research, administrators automate paperwork, and academics use them to sketch out funding proposals before refining by hand.
Used thoughtfully, these systems lift repetitive chores off people’s plates, leaving more room for strategy and creativity. But when “time saved” simply becomes “time reassigned,” workloads can quietly creep upward. And in workplaces handling sensitive data, trust and security must be baked in, otherwise the productivity gains won’t stick.
The (not-so) hidden footprint
All this activity doesn’t come free of charge to the planet. Training and running LLMs demands hefty amounts of electricity and cooling water in data centres. One recent estimate suggested a single high-end model can consume as much power in a year as several thousand homes.
So, if we can teach AI to write more empathetically, can we teach it to tread more lightly too? Manchester researchers are already exploring greener computing, while across the sector, experiments with renewable contracts, efficient chips, and model ‘distillation’ aim to curb emissions. Environmental responsibility, like data security, is part of making AI safe enough to scale.
Work, ethics, and creativity
Overall, are we really more productive, happier and smarter because of GenAI? Or are we simply busier in new ways? Are we engaging with our work as we should be?
With all of this progress come harder questions. For some roles, AI will augment what we do; for others, it may well displace our work altogether. Universities and employers will need to prepare staff, balance workloads and embed ethics so that efficiency doesn’t come at a human cost.
There’s also the quieter question of what it means to think for ourselves. Writing, once the slow craft of wrestling with an idea, can now be co-authored with a machine. Digital art can be whipped up in the style of a living painter from a single prompt. That doesn’t erase the need for curiosity or invention; if anything, it makes critical thinking, research, and creativity all the more precious – something to safeguard from the floodwaters of convenience.
Businesses and institutions will need to protect the people whose work underpins their success. As individuals, we also have to decide when to lean on the hive mind these systems offer, and when to meet the blank page on our own.
Like autumn’s quick arrival, generative AI has swept in with a momentum that feels irreversible. It is helping students learn, doctors listen, and office workers breathe easier – yet it also asks us to rethink what effort, creativity and even empathy mean. Its impact will depend not only on how boldly we use it, but on how carefully we protect against the risks it carries – from data and dignity to the integrity of assessments and the safety of patients. As Alan Turing reminded us on this very topic 75 years ago: “We can only see a short distance ahead, but we can see plenty there that needs to be done.”
“LLMs could be trained to identify early signs of learning disabilities – enabling educators to create personalised teaching strategies, approaches, and resources.”

