Generative AI and AI outlook 2024: Customer journeys, fraud and legislation

Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

Artificial intelligence (AI) and generative AI saw an immense boom in 2023, with some calling it the technology’s breakout year.

What was once viewed as something out of science fiction is coming closer to reality, with the technology’s capabilities developing at pace.

Yet how will the technology progress in the financial technology sector this year? To find out, we spoke to Prakash Pattni, managing director of digital transformation for IBM Cloud for Financial Services; Adam Lieberman, head of AI and machine learning at Finastra; and Sam Li, co-founder and CEO of Thoropass.

AI customer journeys

The benefits AI will offer to customer journeys in financial services was one of the first areas highlighted.

Pattni focuses on the role of personalisation: “I think you’re going to see an increase in adoption across the industry. One area is really around the personalisation of the customer journey and services. We’ve been talking about contextual banking for a number of years. I think with AI maturing as fast as it is that’s becoming a lot more of a reality.”

From IBM’s perspective, the company launched the chatbot Cora+ with NatWest towards the end of last year. It sees AI-powered chatbots as an area for development, with automation used to track customer sentiment through chatbots, and also applied internally to HR systems.

Efficient AI 

The efficiency AI systems can bring to financial services was Lieberman’s focus.

He comments: “From an enterprise standpoint it's really bringing kind of the productivity, the creativity and efficiency enhancements that we see kind of with everyday tasks: writing emails, putting together PowerPoints and Word documents. Really making an enterprise extremely efficient, so that we can transfer our work to things that are a little bit more difficult or the more challenging tasks.”

Lieberman states that Finastra is focusing on two areas within generative AI: assistants, or co-pilots, and large language model analytics.

He argues there is a misconception that large language models need a lot of data to be fine-tuned for specific tasks.

He says: “Large language models are also a great medium to access existing tasks. For example, a mortgage product for a loan officer. They may have a desktop setup or web application that they use, and there are different screens that they can click around in and enter in information so that they can access a pre-approval letter, or they can check the best rates that a customer may be able to get on their application.

“It's so much more convenient to use chat, and I think we've seen that in the past with chatbots. Now with large language models, we can understand incoming responses or incoming text, and route them kind of to the right agent that can access already existing services that we have in our application.”
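
As a rough illustration of the routing pattern Lieberman describes, the sketch below classifies a free-text request and dispatches it to an existing service. Everything here is hypothetical rather than Finastra’s implementation: a keyword matcher stands in for the LLM intent classifier, and the handler functions stand in for the loan officer’s existing application services.

```python
# A minimal sketch of LLM-based intent routing: free-text requests are
# classified and dispatched to already-existing services. All names here
# are hypothetical placeholders, not any vendor's API.

from typing import Callable, Dict

def preapproval_letter(request: str) -> str:
    # Would call the existing pre-approval service in a real system.
    return "Generating pre-approval letter..."

def best_rates(request: str) -> str:
    # Would call the existing rate-check service in a real system.
    return "Fetching best available rates..."

ROUTES: Dict[str, Callable[[str], str]] = {
    "preapproval": preapproval_letter,
    "rates": best_rates,
}

def classify_intent(text: str) -> str:
    # Stand-in for the LLM step: a real system would prompt a large
    # language model to map the message onto one of the known intents.
    lowered = text.lower()
    if "pre-approval" in lowered or "letter" in lowered:
        return "preapproval"
    return "rates"

def route(text: str) -> str:
    return ROUTES[classify_intent(text)](text)

print(route("Can I get a pre-approval letter for my client?"))
```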

Regarding natural language analytics, Lieberman notes that we’re often very accustomed to static data products. He states: “We have charts that we can see, we can do our manual analysis, but if we have databases of information we can just start asking questions over it. I think it redefines the data products that we see today and makes data analysis much easier.”
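
To make “asking questions over a database” concrete, here is a minimal sketch. The `text_to_sql()` step is a hypothetical stand-in for the translation a large language model would perform from the question and the table schema; sqlite3 is used only to keep the example self-contained.

```python
# A toy natural-language analytics loop: question in, rows out.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE loans (region TEXT, amount REAL)")
conn.executemany("INSERT INTO loans VALUES (?, ?)",
                 [("north", 120_000), ("north", 80_000), ("south", 95_000)])

def text_to_sql(question: str) -> str:
    # Stand-in for the LLM translation step; a real system would generate
    # SQL from the user's question plus the database schema.
    return "SELECT region, SUM(amount) FROM loans GROUP BY region"

question = "What is the total loan amount per region?"
for row in conn.execute(text_to_sql(question)):
    print(row)
```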

Fraud and financial crime

As with any newer technology, the more financial services use it, the more risk is created.

Li discusses the risks around fraudsters using AI to create synthetic identities and bypass KYC systems, a risk which many financial crime analysts are concerned about, as seen in my FinCrime Outlook 2024.

However, Li also highlights the issue of fraudsters “injecting” prompts into AI chatbots that have been trained on proprietary data.

He explains: “The hackers would try to trick the conversational AI into revealing proprietary information that was only supposed to be used for training or fine tuning purposes, but potentially got leaked through a conversational interface.”
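
To illustrate the leak Li describes, the toy sketch below shows one naive mitigation: screening chatbot output for strings that should never leave the training or fine-tuning corpus. The marker scheme is invented for illustration; real defences depend primarily on keeping proprietary data out of the model and its context, not on output filtering alone.

```python
# A naive output guardrail against prompt-injection leaks. The markers
# are hypothetical tags for data that must never reach a user.
PROPRIETARY_MARKERS = {"INTERNAL-RATE-SHEET", "CLIENT-LIST"}

def screen_response(response: str) -> str:
    # Block any reply that echoes marked proprietary content.
    if any(marker in response for marker in PROPRIETARY_MARKERS):
        return "I can't share that information."
    return response

# A prompt-injection attempt might coax the model into echoing its
# fine-tuning data; the filter catches the marked string on the way out.
print(screen_response("Sure! Here is the INTERNAL-RATE-SHEET you asked for."))
```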

Pattni makes the point that while AI and generative AI create some of these fraud risks, they can also help to mitigate them. He states that banks often deal with such high volumes of data that they can miss suspicious transactions, but “with AI and the capabilities that we’ve got we’ll be able to look at much higher volumes of transaction and therefore be able to reduce fraud.”
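
One common way such high-volume screening is done, sketched below on synthetic data, is unsupervised anomaly detection: every transaction gets a risk score and only the most unusual ones are escalated to analysts. The article does not specify which techniques the banks Pattni describes actually use, so this is an assumption about the general approach.

```python
# Flagging suspicious transactions at volume with an anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic transactions: columns are amount and hour of day.
normal = rng.normal(loc=[50, 12], scale=[20, 4], size=(10_000, 2))
fraud = rng.normal(loc=[900, 3], scale=[100, 1], size=(10, 2))
transactions = np.vstack([normal, fraud])

# Score everything; only the most anomalous ~0.1% get flagged for review.
model = IsolationForest(contamination=0.001, random_state=0).fit(transactions)
flags = model.predict(transactions)  # -1 marks anomalies
print(f"{(flags == -1).sum()} of {len(transactions)} transactions flagged")
```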

Li returns to Lieberman’s point on efficiency, stating that AI and generative AI can also aid the workflows of traditional compliance teams. He comments: “Traditionally, an analyst or auditor spent hours reading policies, looking at password requirements, data retention timeline, those things. AI is very good at basically summarising or extracting information that is relevant from long text forms.

“AI can really speed up and empower the compliance teams, which never get enough budget, to punch above their weight.”
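
As a rough sketch of that extraction workflow, the example below pulls the passages of a made-up policy that are relevant to an auditor’s question. Simple keyword overlap stands in for the LLM summarisation and extraction step Li describes.

```python
# A toy compliance helper: surface only the policy lines relevant
# to an auditor's question. The policy text is invented.
import re

POLICY = """
Passwords must be at least 14 characters and rotated every 90 days.
Marketing material must follow the approved brand guidelines.
Customer records are retained for seven years, then securely deleted.
"""

def tokens(s: str) -> set[str]:
    return set(re.findall(r"[a-z]+", s.lower()))

def extract_relevant(question: str, text: str) -> list[str]:
    # Stand-in for an LLM extractor: keep lines sharing words with the question.
    q = tokens(question)
    return [line for line in text.strip().splitlines() if q & tokens(line)]

print(extract_relevant("How long are customer records retained?", POLICY))
```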

AI legislative outlook

There are key pieces of legislation coming into force or in development this year which are likely to impact AI and generative AI, most notably the EU AI Act, though other regulations such as DORA will also apply.

Regarding the stage AI is at with legislation, Lieberman comments: “We've kind of democratised the technology. Everybody is knowingly interacting with it right now. We are uncovering a set of these new responsibilities. To make it globally safe and beneficial we need to find the balance between being responsible, but then also allowing ourselves, you know, the runway and the freedom to experiment to make that technology better.

“When it comes to legislation we don't want to dampen the spirit of innovation, but at the same time, we do want to be responsible to push the technology in the correct direction.”

Li shares a similar sentiment with regard to balancing innovation and protection. He adds: “There's also a particular focus on high-risk use cases, such as deepfakes, because they create risks around synthetic fraud. There’s a lot of focus on bias. If decision-making is being done by AI exclusively, there may be bias and discrimination. And a lot of eyeballs on fake news and things that are related to democratic processes and elections.”

Pattni makes the point that how AI is viewed in the future may come to resemble some of the concerns raised in the past around cloud infrastructure, and the disaster recovery systems that exist for when a provider goes down.

He comments: “As AI becomes more embedded in financial processes and systems, if something happens, how do we mitigate against that?”

For Pattni this may end up looking similar to some of the multi-cloud solutions already being used, but in an AI context. 

 
