How will AI be regulated in the UK?

Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

AI is impacting all sectors across financial services - capital markets, investment and retail banking; you name it, it's algorithmed it. If you are seeking out a financial product, a mortgage or a dematerialised instrument, AI is likely in the mix.

Last year NVIDIA reported that over 75% of companies were employing at least one of the traditional AI use cases of machine learning, deep learning, or high-performance computing.

So, AI is already well embedded but is it embedded well?

Over the past few years, each time a Financial Services Bill has passed through the House of Lords I have proposed two additions: one looking at ways to ensure the ethical deployment of AI in financial services organisations, and a related proposal for a designated AI-responsible officer in every firm.

AI, and specifically generative AI, will be a gamechanger: clear, coherent, right-sized regulation will be essential.

Banks might be clamping down on employees' use of ChatGPT, but AI now features in 30% of their job adverts. Bloomberg has introduced BloombergGPT, a 50-billion-parameter large language model purpose-built from scratch for finance.

What had been a steady stream of development has become a torrent.

Financial services organisations are continuing to use traditional forms of AI but are also exploring and investing in generative AI, amusingly characterised by Simon Taylor as a “primordial soup of development and chaos”.

Recent reports by McKinsey and Bain Capital explore the opportunities, predicting that the impact on productivity could add trillions of dollars to the global economy and showing how financial services organisations can leverage generative AI to fulfil their potential for customers, employees, and shareholders.

Traditional AI can be used for first-order analysis, rules-based sorting, classification, and Q&A engagement, but it is not sufficient to cover all edge cases, respond to never-before-seen problems, or exercise judgment.

Generative AI can do just this. It is a completely new platform, a new way of communicating with technology, an interface that simplifies complexity and holds incredible promise for transforming the customer experience. It has the potential to make financial services bots more 'emotionally available' to customers.

The current financial services system is expensive as a consequence of its construction. No one would argue against good governance and compliance, although, ironically, they mean the industry is all too often experienced by the customer as high cost and low satisfaction. Generative AI, correctly considered and deployed, could reverse this.

For as long as I have been thinking about AI, I have been campaigning to improve financial inclusion and financial literacy. With generative AI we have the opportunity, potentially, to serve people better and to include and enable them financially.

Whilst hopeful about the possibilities, I am equally aware of the need to ensure the technology is used responsibly and ethically. In continuing to put forward my FS Bill amendments on AI (ethical deployment and an AI officer), I have argued that the health of our financial services and fintech industry and the quality of our regulators make this an ideal place to start.

Some respondents to our recent report on ethical AI felt that the sector was already adequately served. For example, the FCA said that the new consumer protection rules coming into force in July would allow it to act against harms to customers from AI systems.

This may well be true, but will it be sufficient? Part of the thinking behind my amendments is that they would also act as a way of building expertise and a talent pool in such an important sector.

Responsible operators will always act with an eye on the risk environment and on potential future regulation, following that by putting in place practices that take a commercial view on insulating the business as well as possible.

In Europe, the AI Act is the first in a series of complex pieces of legislation designed to improve transparency around AI's development, but which may slow its use. In the US, by contrast, there is a much lighter-touch approach.

In the UK we have a strong history of regulating new technology – in particular through the establishment of sandboxes that allow experimentation in a safe and responsible way. Essential concepts underpinning regulation must include transparency, 'explainability', fairness, and safety.

Whilst I think more could be done, I welcome the Government's latest pronouncements on regulation. During our most recent Lords debate the Minister responded to my AI proposals by saying:

“The FCA, the PRA, and the Bank of England recently published a discussion paper on how regulation can support the safe and responsible adoption of AI in financial services. Last week, the Government announced that the UK will host the first major global summit on AI safety this autumn.”

I would also flag the recommendations from our report, 'Guardrails and Catalysts to make AI a force for good', which centre on an international approach, governance, and culture change in the UK (including public engagement through 'people's panels'), alongside proposals for targeted legislation and for leveraging standards and procurement.

We have such a moment in time, individually, nationally, globally, to get this right. As with all new technologies, AI is in our hands to mould, to collectively, connectively, creatively make it a success.
