AI: promises and perils

1st December 2025 | McInroy & Wood

Introduction

The fluid, fast-moving and often conflicting narratives around artificial intelligence (AI) are challenging for all of us. In the popular imagination, the pendulum swings between exciting opportunity and existential threat with dizzying speed.

But what’s hard fact, and what’s speculative fiction? And how should we think about AI as we look to the future? We recently hosted a talk by Professor Shannon Vallor to help us form a coherent picture of the promises and perils of AI.

Professor Vallor is the Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute. She started by observing that AI threatens us not as some kind of malign robotic predator but from within our humanity itself. She then punctured some of the myths and misconceptions around AI, before setting out two contrasting visions of our AI-enabled future, the current risks arising from its use, and the course correction needed to ensure that AI works for us all.

Myths and misconceptions

As Professor Vallor pointed out, AI is not one thing. There are many different kinds of AI tools, and these are not limited to large language models (LLMs) or generative AI (GenAI).

Nor is GenAI innately superior to human beings. Ultimately, GenAI is a mirror, not a mind: it reflects the patterns in the data that’s fed to it.

So although these tools can do many things that humans do, they also can’t do many things that we can. The idea that they will replace or surpass us doesn’t hold up. For now, and for the foreseeable future, AI tools have no minds or motives of their own.

In fact, AI tools simply reflect and magnify human behaviour and errors. They are trained on human data and are just as fallible as that data, if not more so.

And while GenAI tools are improving, their well-attested capacity for fabrication – making things up – appears integral to how they operate. They are also hard to improve consistently. When improvements are made in some areas, LLMs often develop greater problems elsewhere. We simply don’t know how much room is left for improvement.

Forking paths

So what can AI offer us, and where do the main risks lie?

Professor Vallor sees the greatest potential in AI’s ability to amplify and extend human abilities. A striking example is Google DeepMind’s AlphaFold. It has predicted over 200 million protein structures at a speed that would be impossible for human scientists. But it doesn’t replace scientists’ knowledge of proteins or biomolecular systems. Instead, it extends them. AlphaFold’s focus is narrow, and its limits are well understood. It’s very good at one thing, and that’s all it does.

That contrasts with LLMs. Unlike AlphaFold, these tools can do many things. But that versatility leaves room for abuse, errors and unpredictability.

How, then, might our AI future play out?

When used productively, AI has huge promise for humanity. It could help us reduce scarcity, restore trust in institutions, cut working hours, foster cooperation through translation tools and renew our faith in the future.

But current political and economic incentives threaten to put these positives out of reach. Instead, we may have to contend with AI’s considerable negatives. These include unlimited demand for resources, security risks, an unsettling pace of automation and the unpredictable and unsafe behaviour of AI agents.

There are also insidious threats to how we perceive our world. Fakery generated with AI can pollute discourse in science and the media and exacerbate political polarisation. Chatbots can engage in manipulation and cause psychological harm; and humans can be deskilled, losing confidence in their own abilities. These trends threaten to amplify inequalities, erode democracy and expand authoritarian state control.

Risks and course correction

As AI companies have increasingly shrugged off their responsibility for the ethical risks, these burdens are now falling on highly regulated industries, including finance. We’ve already seen individuals and institutions get into trouble as a result of LLM fabrications. Air Canada provides a recent example: in 2024 it lost a tribunal case after its customer-service chatbot invented a discount policy that the airline then refused to honour.

Meanwhile, there is already growing evidence of harm – from employee deskilling to ‘workslop’ to consumer mistrust, security risks and psychological injury. This evidence is amassing much faster than any clear cost savings or productivity improvements.

But what could force a change of course? There are some hopeful signs. Evidence of an AI investment bubble is growing, so leaders are looking for more realistic assessments of AI’s promise – especially after a recent MIT study found that 95% of companies have not seen any return on their AI investments. Meanwhile, the public want more AI regulation, not less. These factors could short-circuit the hype cycle, giving us an opportunity to restore the ethical guardrails needed for safe, reliable and trustworthy AI. But while we wait, AI is already putting considerable stress on our institutions, shared sense of reality and confidence in our own judgment.

The future is not yet written

Although AI won’t try to destroy or enslave us, it does threaten to sap our sense of our own potential – to plunge us into, in Professor Vallor’s words, a “vicious cycle of helplessness”.

But we don’t have to accept this. Although the dominant AI narrative stresses its inevitability, that’s designed to ensure that those who are currently choosing AI’s future remain in the driving seat. If we adjust the perverse incentives that currently prevail, we can look forward to a better kind of AI – and a brighter future for us and our descendants.

At the same time, we need to avoid losing our faith in technology. We’ve already seen the harm caused by a loss of faith in medicines – especially vaccines. So rather than turning our backs on technology, we must ensure that it works to our benefit.

For investment managers like McInroy & Wood, Professor Vallor’s insights give us a great deal to think about. We recognise the potential of AI as a transformational technology, but we’re also wary of the risks. Alongside the adverse ethical and societal outcomes that AI could entail, we also see the risks of overinvestment, circular spending and bubble-level valuations.

For that reason, we take only selective exposure to the AI theme. Professor Vallor outlined the ways in which a correction in many aspects of AI would be a good thing. For us, that’s all the more reason to be cautious as we focus on achieving the best outcomes for our clients.


The views expressed in this document are not intended as an offer or solicitation for the purchase or sale of any investment or financial instrument. The information contained in this document does not constitute investment advice and should not be used as the basis of any investment decision. References to specific securities are included for the purposes of illustration only and should not be construed as a recommendation to buy or sell these securities.
