The EXPERT, November 2025, Issue 1


Omar possesses that measured rationality so often associated with academics. In this case, it’s not pretentious, nor is it something he puts on – like the raincoat hanging on the back of his chair. Rather, it’s something that resides within his very temperament. He takes considered pauses to think. He orders an espresso. It soon becomes apparent, as he sips on his single shot of coffee, that Omar is not all that alarmed by ChatGPT.

The majority of us are not Senior Lecturers in Machine Learning like Dr Omar Rivasplata, and the launch of ChatGPT seemingly occurred overnight. This newfangled technology claimed human-like capabilities, posing more questions than answers for the everyday person.

But Omar worked at Google DeepMind for three years, spending his last six months at the company working with a team trying to develop a conversational LLM just like ChatGPT. “The surprising part for me,” he laughs, casting his mind back to late 2022, “was that it came out of another company.”

The University’s Centre for AI Fundamentals brought Omar to Manchester in 2024. Born in Peru, he completed his postgraduate studies in Canada before landing a research job at UCL. However, he saw potential in developing a career up North, and had an itch to work with leading figures like Samuel Kaski. Soon enough, a job opening came up and the rest (including his awful commute from Hampstead to London) is history.

Omar views the bigger picture of ChatGPT as a film we’ve all seen many times before. “Every now and then, there is some innovation that completely disrupts the way people do things. The invention of writing and paper. The mechanisation of agriculture. The introduction of computers in the office,” he explains. “Any product that has been developed with engineering and technology is only going to be improved.”

However, Omar is aware that ‘artificial intelligence’ could evoke fantastical images of human redundancy, computer consciousness or, at worst, robot uprising. So, he wants to set the record straight: “These products are optimised to be very good at one task. You change the task? The product is terrible. We don’t have one that is good across the board. I know how much effort goes into helping the product become good at one thing!”

Turns out, AI outputs don’t sing by themselves. “These products can’t do everything we can do,” Omar continues. “We have smartphones, right? Compared to previous generations, the smartphone was something that had many more capabilities. Yet, smartphones are not smart in the same way that we are. Not even smart in the same way as a dog.”

“It’s just in our imagination that we extrapolate all these things as becoming smart, competing with and eventually beating us. They don’t have anything that could be called self-consciousness.”

In my periphery, I spy the soggy mint leaves in my glass teapot, far-removed from the cup of honey-coloured tea beside it. We’re early into our conversation on ChatGPT, but as fact is filtered from fiction, the once murky waters are slowly becoming clearer.

Meanwhile, the issues of scams, phishing and fraud inevitably arise, but Omar seems reluctant to feed the salacious narrative adopted by tabloids online. “AI can be very effective in the hands of someone with destructive purposes, but this is no different to other technologies introduced ahead of regulation and law.”

He counters concerns about the exploitation of ChatGPT by pointing to AI-enhanced security measures. He argues that increased competition among LLMs is already producing tighter regulation.

“Collectively, we need to adapt to the new technology and create safety measures, maybe involving key participants from across sectors of society,” he elaborates, citing those in policy making and e-commerce.

Interestingly, Omar places power back into the people’s hands. “Now, here is one thing about privacy: it’s something that you can only lose. It’s not something that you can improve.”

As the lunchtime rush crescendos in a clattering of cutlery and impassioned small talk (about ChatGPT on the next table, would you believe), I press Omar to explain.

“You have your current status of privacy, and you can only make it worse. Information can only be leaked,” he says, earnestly. “The same as you would apply to interacting with the internet – opening online accounts, revealing information about yourself – those rules apply to interacting with ChatGPT.”

Later on, Omar borrows an analogy from the philosopher of technology Shannon Vallor. In essence, the principle is this: you look into a mirror and see a dirty face. Do you smash the mirror? Do you ban the use of mirrors? The answer is no. Instead, you just wash the dirt away.

“ChatGPT has picked up the patterns of what humans say, and how we respond to certain queries,” he says, pausing. “The things that concern us – they’re an opportunity to take a good look at ourselves.”

Self-reflection also matters for understanding the technology’s flaws. The internet is saturated with bias, and ChatGPT trawls through this information to answer prompts. So, excluding diverse opinion and experience from scientific exploration not only carries the obvious ethical ramifications – it leaves the product itself poorer in quality.

As I offer the word ‘intersectionality’, Omar enthusiastically nods in agreement. He tells me about some recently acquired internship funding tailored to people from non-traditional backgrounds. The opportunity stems from a partnership with the Royal Academy of Engineering, sponsored by Google DeepMind and the Hg Foundation, and will support final-year students wanting to pursue AI-related careers.

Dried coffee rings have formed at the bottom of cups, and conversation draws to a natural close. The knots in my stomach over ChatGPT are seemingly untangled; however, one thought still persists. Where is the line drawn in ChatGPT exploration? Or rather, just because we can – should we?

Omar is caught off guard by the turn, and takes a minute to collect his thoughts. He replies, the opinion organically forming in front of him: “I don’t think we should be very hard on preventing curiosity and ingenuity.”

Though noting the difference between the individual and something of public concern, he maintains the need for scientific freedom. “I hope that we don’t get to the point where we forbid people from exploring or experimenting with things, because good things can come out of them.”

Omar departs with a smile, and the neighbouring table – mathematics postgraduates – are now in heady debate. One person says they don’t trust the quality of ChatGPT’s information. Another says it’s useful in finding citations. A third critiques the inevitable cost increase once the free trial periods end.

It seems that among experts and everyday people alike, sharing our concerns on this one-way journey is a good starting point. As Omar said, “AI is going to make its way into more aspects of human life – that’s inevitable. But I don’t think it has to be a fatalistic outcome. I think the outcome is really up to us.”
