How serious is the danger of AI?

Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

Fear around AI continues to dominate headlines. Most recently, the Financial Times published an interview with Securities and Exchange Commission (SEC) chair Gary Gensler, who stated that without AI regulation, it is “nearly unavoidable” that the technology will be the catalyst for the next financial crisis.

Gensler pointed to the challenge that most financial institutions would be using models that were not created within the regulated walls of a bank, but outsourced to technology firms not subject to the same restrictions. An adjacent issue, he added, is that many institutions would be relying on the same model.

Gensler told the FT: “It’s a hard financial stability issue to address because most of our regulation is about individual institutions, individual banks, individual money market funds, individual brokers; it’s just in the nature of what we do. And this is about a horizontal [matter whereby] many institutions might be relying on the same underlying base model or underlying data aggregator.”

He continued: “If everybody’s relying on a base model and the base model is sitting not at the broker dealer, but it’s sitting at one of the big tech companies. And how many cloud providers do we have in this country?”

This reliance on a single model was also raised across the pond, at a debate held in the UK Parliament that welcomed leaders from the fintech and AI sectors to discuss the technology ahead of Prime Minister Rishi Sunak’s global AI summit in November.

At the event, Jamie Beaumont, founder of Playter, said: "One of the biggest problems I find is regulating something that you don't know is very hard. The trajectory of AI has gone crazy over the last year; when we don't know the next step or where AI will be in a year, how do we regulate?" Dominic Duru, co-founder of DKK Partners, added: "If you can't understand, you can't regulate. The businesses are the ones that understand what needs to be stopped."

What is AI and are there plans for the technology to be regulated?

As Luigi Wewege, president of Caye International Bank, said in a Finextra blog, “artificial intelligence (AI) is reshaping our industries and redefining the way we conduct business. The banking sector is no exception.” He went on to say that, as other industries treat AI as an opportunity, financial services must do the same to keep pace, prioritising the transformation of their operations, customer experience and fraud detection.

However, as mentioned above, regulation must be considered for such a powerful technology. In July 2023, the European Parliament released information on the world’s first comprehensive AI law, the AI Act. “As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy,” the European Parliament said in a statement.

This follows the European Commission’s first proposal of a regulatory framework for AI in 2021, which set out how AI systems used in different applications should be analysed and classified according to the risk they pose to users. Once systems are classified and each risk level is matched to the appropriate degree of regulation, the AI Act will come into force.

For instance, unacceptable-risk AI systems, those considered a threat to people, will be banned; these include cognitive behavioural manipulation of people or of specific vulnerable groups, social scoring, and real-time remote biometric identification systems such as facial recognition. The aim is to reach an agreement by the end of 2023.

According to the European Parliament, “Generative AI, like ChatGPT, would have to comply with transparency requirements: disclosing that the content was generated by AI, designing the model to prevent it from generating illegal content, publishing summaries of copyrighted data used for training.”
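To make those requirements concrete, the minimal sketch below, in Python, shows one way a bank-facing chatbot might label its output as AI-generated and gate obviously prohibited content. The function names, the stand-in model, and the keyword blocklist are all assumptions for illustration; they are not part of the AI Act's actual technical requirements or any real compliance framework.

```python
# Illustrative sketch only: a hypothetical wrapper around a text-generation
# callable. It demonstrates two of the transparency ideas mentioned above:
# disclosing that content is AI-generated, and blocking prohibited output.
# The keyword blocklist is a deliberately crude stand-in for the kind of
# content-safety classifier a real system would use.

BLOCKED_TOPICS = {"sanctions evasion", "money laundering"}  # hypothetical list


def compliant_reply(generate, prompt: str) -> str:
    """Generate a reply, gate prohibited topics, and append an AI disclosure."""
    reply = generate(prompt)
    if any(topic in reply.lower() for topic in BLOCKED_TOPICS):
        return "This request cannot be answered. [AI-generated response]"
    # Transparency requirement: disclose that the content was AI-generated.
    return f"{reply}\n\n[This response was generated by AI.]"


if __name__ == "__main__":
    # Stand-in for a real generative model.
    fake_model = lambda p: f"Here is some general guidance on '{p}'."
    print(compliant_reply(fake_model, "opening a savings account"))
```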

What is generative AI and how can banks leverage the technology?

In May 2023, G7 national officials also met to discuss the challenges posed by generative AI tools like ChatGPT. This meeting followed an agreement to form an intergovernmental forum, the ‘Hiroshima AI process’, to debate the growing use of AI tools. The G7 AI working group works in cooperation with the OECD and the Global Partnership on AI (GPAI) to provide recommendations for heads of state by the end of 2023.

According to Ernesto Funes, founding partner at Stratio BD, “The impact of generative AI will leave no industry unscathed,” though perhaps in a positive way. He went on to say: “For the finance industry, the potential here too is endless – and not just on internal matters such as task automation, better fraud detection, and more personalised services. The customer experience stands to greatly benefit too, with ChatGPT-style tools that are able to intelligently and instantly understand and answer any question without the need to engage a bank employee.”

It could be argued that a bank’s biggest problem is access to high-quality data; while a financial institution holds a wealth of data, much of it is of poor quality, siloed, and stored across numerous locations. With generative AI and good data architecture, as Funes explored, data has the potential to be handled effectively at scale and in real time. But is generative AI dangerous?

In Finextra’s view, the best thing to do is analyse the actionable insights. The financial services industry must continue to handle data carefully and securely, and scrutinise the information that a generative AI model, or any AI model, provides; a little scepticism can go a long way. As the European Parliament advises, as long as models comply with transparency requirements, disclose that content was generated by AI, and are prevented from generating illegal content, generative AI is not dangerous. However, regulation will need to come into force to ensure this happens.

Is AI dangerous?

As discussed, under the EU’s proposed regulation, AI will be classified into four levels of risk: minimal risk, limited risk, high risk, and unacceptable risk, with products and services that fall under the last category being banned. While this is a European initiative, discriminatory practices that originated in the US, such as denying services to residents of certain areas or to people of a certain race or ethnicity, will also be called into question.
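One way to picture this classification is as a simple internal register a compliance team might keep. The sketch below, in Python, uses the four tier names from the proposal; the example use cases and the lookup-table design are assumptions for illustration only, although credit scoring is commonly cited as a likely high-risk category.

```python
# Illustrative sketch only: a hypothetical register mapping AI use cases to
# the EU AI Act's four proposed risk tiers. The tier names come from the
# proposal; the use cases and their assessments below are assumed examples.

from enum import Enum


class AIActRiskTier(Enum):
    MINIMAL = "minimal risk"            # largely unregulated
    LIMITED = "limited risk"            # transparency obligations apply
    HIGH = "high risk"                  # strict requirements before deployment
    UNACCEPTABLE = "unacceptable risk"  # banned outright


# Hypothetical internal register of AI use cases and their assessed tiers.
USE_CASE_REGISTER = {
    "customer-service chatbot": AIActRiskTier.LIMITED,
    "credit scoring model": AIActRiskTier.HIGH,
    "social scoring of customers": AIActRiskTier.UNACCEPTABLE,
}


def is_permitted(use_case: str) -> bool:
    """Unacceptable-risk systems would be banned under the proposal."""
    return USE_CASE_REGISTER[use_case] is not AIActRiskTier.UNACCEPTABLE


print(is_permitted("credit scoring model"))         # True, with strict rules
print(is_permitted("social scoring of customers"))  # False
```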

According to Evgeniy Ivantsov, chief marketing officer at FYST, while regulating discriminatory practices is welcome, “payments and fintech firms could find themselves in a quagmire of legal and ethical conundrums that could slow business growth – or force them to abandon some services altogether. The potential for misinformation to be regurgitated through customer service AI touchpoints like chatbots is enough to keep any risk or compliance officer awake at night. Given that generative AI can only present information that was fed into it, how can an algorithm verify whether that information was biased or incorrect, and gave the customer the right service?”

Further to this, Ivantsov noted that “the introduction of Consumer Duty rules in July 2023 could potentially see AI-generated product or service pricing or communications breaching requirements if they result in poor outcomes for retail customers, particularly those in vulnerable groups like the elderly or low-income households.”

While the technology itself is not dangerous in terms of adoption, it is evident that before widespread use, regulation will need to be established, and the financial services and fintech industries will need to adapt fast.
