
AI should be trained to respect a regulatory 'constitution' says BofE policy maker

Innovative AI models should be trained to respect a ‘constitution’ or a set of regulatory rules that would reduce the risk of harmful behaviour, argues a senior Bank of England policy maker.


Editorial

This content has been selected, created and edited by the Finextra editorial team based upon its relevance and interest to our community.

In a speech at CityWeek in London, Randall Kroszner, an external member of the Bank of England's financial policy committee, outlined the distinction between fundamentally disruptive versus more incremental innovation and the different regulatory challenges posed.

"When innovation is incremental it is easier for regulators to understand the consequences of their actions and to do a reasonable job of undertaking regulatory actions that align with achieving their financial stability goals," he says.

However, in the case of AI, innovation comes thick and fast, and is more likely to be a disruptive force, making it "much more difficult for regulators to know what actions to take to achieve their financial stability goals and what the unintended consequences could be for both stability and for growth and innovation."

Kroszner suggests that the central bank's upcoming Digital Securities Sandbox, which will allow firms to use developing technologies such as distributed ledger technology in the issuance, trading and settlement of securities such as shares and bonds, may not be an applicable tool for dealing with artificial intelligence technology.

"Fundamentally disruptive innovations - such as ChatGPT and subsequent AI tools - often involve the potential for extraordinarily rapid scaling that test the limits of regulatory tools," he notes. "In such a circumstance, a sandbox approach may not be applicable, and policymakers may themselves need to innovate further in the face of disruptive change."

He points to a recent speech by FPC colleague Jon Hall that highlighted the potential risks emerging from neural networks becoming what Hall referred to as ‘deep trading agents’, and the potential for their incentives to become misaligned with those of regulators and the public good. This, Hall argued, could amplify shocks and reduce market stability.

One proposal to mitigate this risk was to train neural networks to respect a ‘constitution’ or a set of regulatory rules.

Kroszner suggests that the idea of a ‘constitution’ could be combined with, and tested in, a sandbox as a way of shepherding new innovation in a manner that supports financial stability.

"In the cases where fundamentally disruptive change scales so rapidly that a sandbox approach may not be applicable, a ‘constitutional’ approach may be the most appropriate one to take," he says.

Discover the new challenges and opportunities artificial intelligence brings to the banking sector at Finextra's first NextGenAI conference on November 26 2024.


Comments: (2)

John Davies CTO at Incept5

It's a lovely idea but simply not feasible, or even technically possible. It's like putting back-doors into encryption: it's just not mathematically possible without fundamentally breaking the cryptography. Firstly, the LLMs or GPTs being referenced are global, so whose regulations do you build into the model? Secondly, LLMs don't follow rules like that; they can be tuned towards a direction but, like people, they find loopholes, and unless a rule is rock solid and unambiguous, which regulatory rules rarely are, it simply won't hold in an LLM. Censored LLMs, which is effectively what this is suggesting, work badly: you're training the model and then un-training it. We saw what happened with some of the recently censored models proposing black German soldiers in 1943 and Native American founding fathers of the USA, all in the admirable pursuit of inclusion, but failing. What is needed is better management of LLMs: the UK and EU should be using privately hosted LLMs, with frameworks around those to assert compliance and adherence to regulatory practices. This is a hybrid of LLMs, RAG and traditional integration. It can be done, but not in the way suggested here.
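The hybrid the commenter describes, deterministic compliance checks wrapped around a model's output rather than rules trained into the weights, might be sketched roughly as follows. Everything here is hypothetical for illustration: the rule values, the `ProposedTrade` shape, and the function names are assumptions, not any real regulatory framework or API.

```python
# Hypothetical sketch: instead of training a 'constitution' into the model,
# wrap a (privately hosted) LLM with a deterministic compliance layer that
# vets each proposed action against unambiguous rules before execution.
from dataclasses import dataclass


@dataclass
class ProposedTrade:
    instrument: str
    quantity: int
    venue: str


# Illustrative rule set -- in practice these would encode actual regulatory
# constraints (position limits, approved venues, etc.).
MAX_QUANTITY = 10_000
APPROVED_VENUES = {"LSE", "XETRA"}


def compliance_check(trade: ProposedTrade) -> list:
    """Return a list of rule violations; an empty list means the trade may proceed."""
    violations = []
    if trade.quantity > MAX_QUANTITY:
        violations.append(f"quantity {trade.quantity} exceeds limit {MAX_QUANTITY}")
    if trade.venue not in APPROVED_VENUES:
        violations.append(f"venue {trade.venue!r} is not an approved venue")
    return violations


def execute_if_compliant(trade: ProposedTrade) -> bool:
    """Gate the model's suggestion: act only when no rule is violated."""
    return not compliance_check(trade)
```

The point of the design is that the rules live outside the model, so they stay exact and auditable regardless of how the LLM behaves; for example, `execute_if_compliant(ProposedTrade("GB0001", 500, "LSE"))` passes, while an over-limit or off-venue trade is blocked with a machine-readable list of reasons.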

Jamie French Financial Technology at Vention

Oh dear, I don't think Mr Kroszner understands how these models work.
