How to avoid generative AI hallucinations


Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

Generative AI has dominated headlines for some time now, most recently with news of Sam Altman, CEO of OpenAI (the developer of generative AI platform ChatGPT), being ousted and rehired within a week. But will generative AI be a technology that banks use long term? According to Karan Jain, CEO, NayaOne, it absolutely is. Jain highlighted that in his 20-year career this is “the fastest banks have leapt out at a piece of technology.”

Jain continued: “Banks are not comfortable enough with generative AI to put it in front of customers, or have it interact with their customers, but they are trying anything and everything they can to see what value the technology can bring. It’s almost become an arms race between the CEOs and CIOs.” Comparing the generative AI boom to the emergence of digital assets – a trend that, according to Jain, “fizzled out very quickly” – he described the adoption of generative AI as “different.”

As Jain pointed out, the current climate is focused on “cost-cutting productivity”, which is of paramount importance given what generative AI promises, and a few banks are already putting the technology to good use. While we are still in the early stages of uptake, it is questionable whether banks are doing enough to explore the perceived benefits of generative AI. However, there are still concerns around AI, particularly AI hallucinations, where a generative AI model presents inaccurate information as if it were correct.

We spoke to Jain; Stuart Davis, executive vice president, internal data protection management, Scotiabank; and Christian Wolf, head of strategic partnerships and ecosystems, Raiffeisen Bank International, about the benefits and limitations of generative AI, the risks and ethical concerns associated with generative AI development, and what needs to be considered before deploying generative AI in a risk-free manner.

Are banks doing enough to explore the perceived benefits of generative AI?

According to Davis, while AI has been around in various forms for decades, “Only now that we have significantly enabled the way computing power is delivered, has AI been brought back to the forefront.” Davis added that many industries have been using a form of generative AI for years, but it has been limited, providing the example of how a form of graph AI powers search engines. In addition to this, vector AI presents further opportunities, such as the ability to search through audio and video.
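To illustrate the vector approach Davis refers to, content such as audio transcripts or video descriptions is typically embedded as numeric vectors and ranked by similarity to a query. The snippet below is a minimal sketch of that idea in plain Python; the three-dimensional vectors and document names are invented for illustration and stand in for a real embedding model and document store.

```python
from math import sqrt

# Toy "embeddings": in practice these would come from a real audio/text/video
# embedding model; the three-dimensional vectors here are purely illustrative.
documents = {
    "call_recording_clip_1": [0.9, 0.1, 0.0],
    "onboarding_video_seg_3": [0.2, 0.8, 0.1],
    "branch_faq_page": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def search(query_vector, top_k=2):
    """Rank stored items by similarity to the query vector."""
    scored = [(name, cosine_similarity(query_vector, vec))
              for name, vec in documents.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# A query embedding (again illustrative) that sits closest to the call recording.
print(search([0.85, 0.15, 0.05]))
```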

Davis said: “It’s really the computing power, the graphics chips and the parallel processing, which has significantly shifted the industry and made generative AI available in a massive way, because now everybody has access to that computing power, not only on their laptops, but also via the cloud.” Diving deeper, he explained the differences between deterministic AI and probabilistic AI – and what it means for the future of generative AI in financial services.

“Deterministic is where you're allowing the AI to follow a set of defined instructions which define how the data relationships are built. There are no random associations and connections.

“Then there's probabilistic AI, which is AI that tries to pick up commonalities and associations on its own. Often, probabilistic AI is trained with massive amounts of information, and that develops the frequency of relationships. How close is the word to the noun that follows it? That's an aspect of AI that self-trains, learns and develops its own table of frequencies and patterns, and builds a graph from that.

“Whenever you interact with the art of the graph, it can break down your question and then associate it back to the next best logical response, given the full context of everything that has already been generated, as well as what it's evaluating.

“That's where generative AI has amazing possibilities. At the same time, we have to develop restrictions so that it's operating in a way that is appropriate, doesn't have bias, doesn't give you information that could be nefarious, or criminal in nature. When I relate this to banks, the domain space that we're in and the evolution of AI, we're just beginning. I believe appropriate risk management of AI is the next step,” Davis added.
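To make the “table of frequencies and patterns” Davis describes a little more concrete, the sketch below counts how often one word follows another in a tiny corpus. It is a deliberately simplified illustration of the idea, with an invented three-sentence corpus, not a description of how a production language model is actually trained.

```python
from collections import defaultdict, Counter

# Tiny illustrative corpus; a real model would be trained on vastly more text.
corpus = [
    "the bank approved the loan",
    "the bank declined the loan",
    "the customer repaid the loan",
]

# Build a table of which word tends to follow which: a crude stand-in for the
# "frequencies and patterns" a probabilistic model learns on its own.
follow_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follow_counts[current_word][next_word] += 1

# The most frequent continuation of "the" given this toy data.
print(follow_counts["the"].most_common(1))  # [('loan', 3)]
```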

Increased risk management could reduce generative AI hallucination, where large language models make up answers in a way that sounds factual. Hallucinations are most likely when models are trained largely on information scraped from the internet, or when they suffer from a knowledge cutoff and are unaware of anything that happened after training. This may be less of a problem for banks leveraging their own data.
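One commonly discussed way of leveraging a bank’s own data is retrieval-grounded prompting: the model is only allowed to answer from passages retrieved out of internal documents, and declines when nothing supports an answer. The outline below is a hedged sketch of that pattern; `internal_documents`, the naive keyword retrieval and `call_llm` are hypothetical stand-ins for a real document store, retriever and model API.

```python
# Hypothetical internal knowledge base; in practice this would be the bank's
# own, access-controlled document store.
internal_documents = [
    "Standard overdraft fee is 5 EUR per occurrence, capped at 20 EUR per month.",
    "Mortgage pre-approval is valid for 90 days from the date of issue.",
]

def retrieve(question, documents):
    """Naive keyword-overlap retrieval; a real system would use vector search."""
    terms = set(question.lower().split())
    return [doc for doc in documents if terms & set(doc.lower().split())]

def call_llm(prompt):
    """Placeholder for a call to an actual generative model."""
    raise NotImplementedError("Plug in your model provider here.")

def grounded_answer(question):
    """Answer only from retrieved context, refusing when nothing supports it."""
    context = retrieve(question, internal_documents)
    if not context:
        # Refusing is preferable to letting the model guess and hallucinate.
        return "No supporting internal document found; escalate to a human."
    prompt = (
        "Answer ONLY from the context below. If the context does not contain "
        "the answer, say you do not know.\n\n"
        "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```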

Wolf had a similar view to Davis and stated that “AI is here to stay. Up until now, AI has used vast amounts of structured data to derive conclusions, but it was not used to generate new content. I think this is the big promise of generative AI, that now you can use these models to generate new insights, generate new documents, pictures, videos, songs, etc. - and we believe this will be a strong efficiency driver.

“It will also help us create a better and more exciting customer experience, because AI can help us to customise our offering to a specific, maybe even to a single, customer. If we look at AI from a pure technology point of view - and I think the industry accepted benchmark here is the Gartner Hype Cycle - there is a good reason why generative AI is still at the peak of that curve, which is called the ‘Peak of Inflated Expectations’. There's a lot of promise, and there are a lot of expectations of the technology, but I think realistically we will need to take a step back and have a more rational view.

“As with any other technology that runs through the Hype Cycle, we will need to sort things out and perhaps dismiss surplus options again very quickly. My expectation is that within the year or so, we will have a much more realistic view on where it really can be used: we are currently exploring the areas of applicability for the technology in banking, and in January we expect to have outlined the concrete use cases with most potential. Additionally, there will be many more use cases unfolding as the technology matures, especially as OpenAI has announced a new set of functionalities targeting developers,” Wolf continued.

What are the limitations of generative AI?

As with any type of AI, there are many limitations to generative AI, and hallucinations are just one of them. Jain agreed and said that the limitations are driven by the early-stage nature of the technology. “The reason why it's an exploratory phase is because it's not proven out of the stack yet. As banks are regulated, it will take a while before generative AI starts making it to public use. Humans in the loop will be needed, in the same way they were when algo trading came out – it was a long time before we let the bots run on the trade alone. And even then, it was supervised by the human in the loop, and there was a kill switch.

“I see generative AI going down the same trajectory, although I do see it going faster because it's applicable to everything, not just trading. Generative AI is applicable to most parts of the bank and as banking technologists, we've all evolved, we know the risk and we're just more mature in terms of how we adopt emerging technologies.”
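Jain’s analogy with supervised algo trading can be made concrete with a simple control wrapper: the model may draft a response, but nothing is released without explicit human approval, and a global kill switch can stop generation outright. The sketch below is illustrative only; `generate_draft` and the command-line review step are hypothetical stand-ins for whatever model and review tooling a bank actually uses.

```python
KILL_SWITCH_ENGAGED = False  # Flipping this halts all generative output.

def generate_draft(request):
    """Placeholder for the generative model producing a draft response."""
    return f"[model draft for: {request}]"

def human_review(draft):
    """Placeholder for a reviewer workflow; here a simple command-line prompt."""
    decision = input(f"Approve this draft? (y/n)\n{draft}\n> ")
    return decision.strip().lower() == "y"

def supervised_respond(request):
    """Keep a human in the loop between generation and release."""
    if KILL_SWITCH_ENGAGED:
        return "Generative responses are currently disabled."
    draft = generate_draft(request)
    if human_review(draft):
        return draft
    return "Draft rejected by reviewer; routed to a human agent."
```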

Wolf continued that most banking applications “are based on large language models, and these require a massive amount of data for training. This massive amount of data from an internal banking point of view can be quite challenging to handle. It's a logistical issue. It's also a regulatory issue, especially under GDPR in the European Union.

“In addition, we still have that black box problem, which means that the AI decision-making process is basically not transparent, which is very important especially in a regulated environment such as banking. Decisions that we might derive out of these models could ultimately have financial implications for our customers, which means they need to comply with regulations.

“We must also be transparent about our decision making as we always treat our customers equally without any discrimination. However, these are the current limitations of AI, and if models are based on historical data, we see that we may not be able to control unforeseen scenarios. Also, when it comes to predictive scenarios, there are limited options, at least for the moment.

“I’m not saying that there is no potential to change, but data privacy and security issues persist, and bias and accountability for decisions are something that we need to address. Ethical concerns that usually involve transparency, fairness and potential misuse of AI are something that we need to investigate,” Wolf explained.

What are the risks and ethical concerns associated with generative AI development and deployment?

Following on from Wolf’s point around ethical concerns, Davis believes that a “risk-free generation of AI” needs to be created. For this, banks will need “a strong deterministic or rules component to any AI system. Number two, the system itself needs to be trained with data that you're legally permitted to use. And there must be an awareness of consumer consent laws and privacy laws. Nobody wants their personal data trained in AI, so banks are looking at that carefully and asking: how do we represent someone's relationships without putting personal information in there?

And I think that's the emerging field, it's not about the actual data anymore. It's about the relationships in the data. AI only knows the relationships in the data, and I think that's a cutting-edge component. My third point would be around ethics and how to build them into AI. A fourth consideration would be the challenge of consumer protection, to make sure everyone has equal opportunity and is treated fairly and with respect.”
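Davis’s first two points, a deterministic rules layer and training only on data the bank is permitted to use, often translate in practice into rule-based screening of text before it ever reaches a model. The snippet below is a minimal sketch of such a deterministic filter; the allow-list and regular expressions are illustrative assumptions, far from exhaustive, and no substitute for a vetted PII-detection service or legal review.

```python
import re

# Illustrative, deliberately incomplete patterns for spotting personal data.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "iban_like": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{10,30}\b"),
    "phone": re.compile(r"\b\+?\d[\d\s()-]{7,}\d\b"),
}

# Hypothetical allow-list: only sources with a documented legal basis are used.
ALLOWED_SOURCES = {"consented_customer_docs", "public_filings"}

def is_permitted_source(source_tag):
    """Deterministic rule: the source must be explicitly allow-listed."""
    return source_tag in ALLOWED_SOURCES

def redact(text):
    """Deterministically replace anything matching a PII pattern."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"<{label}>", text)
    return text

record = {
    "source": "consented_customer_docs",
    "text": "Contact jane.doe@example.com about account DE89370400440532013000.",
}
if is_permitted_source(record["source"]):
    print(redact(record["text"]))  # Contact <email> about account <iban_like>.
```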

Wolf added that it is important to “directly address prevalent fears that a lot of people might have regarding the adoption of AI. There are the ones who fear that AI will lead to job displacement. While this might be a valid concern in certain industries, I strongly believe that it's important to emphasise that we do not see AI replacing, but rather augmenting the capabilities of our workforce in the financial industry.

“If we look at the use cases that we identified internally, I will dare to say that 99% of them concern repetitive and very mundane tasks, where AI would improve efficiency. Utilising AI in these use cases means freeing up the time for employees to focus on more complex tasks. The adoption of technology must also be combined with proper training and upskilling of the workforce.

“It's about developing new roles, which also come with new opportunities, such as increasing the numbers of AI specialists that manage these AI systems. I think that existing initiatives like data governance will also evolve because of data privacy and security issues that we discussed, and I also believe that we have a good foundation for the culture of learning and growth.

“All of this helps AI not to be perceived as a threat, but rather as an opportunity for employees. I'm very much looking into shifting that narrative from ‘AI as a job replacement’ to ‘AI as a job enhancement’ - ensuring we have a more motivated, productive workforce that leverages technology and embraces it, rather than fights it,” Wolf said.

What needs to be considered to deploy generative AI in a risk-free manner?

What does the future hold for generative AI? As evidenced by the views of experts within Scotiabank and Raiffeisen Bank International, there are many steps that need to be considered. According to Jain, as “leaders in the pack” move organisations, they will “cross pollinate” and in turn, adoption rates of generative AI will increase.

Beyond hallucination, data ownership – and whether the product of that data can be monetised – has been an age-old debate inside financial institutions. However, as the technology is now mature enough to be considered for external use, and as regulations concerning AI and the end customer come to the fore, banks should not hesitate to leverage generative AI.

Jain stated that “waiting on the sidelines could mean long term strategic disadvantage,” and for banks, here are his recommendations:

  1. Carve out space, time and resources for generative AI, and secure buy-in from the top.
  2. Bring legal, compliance and risk management along on the generative AI journey.
  3. Find a way of monetising the data that generative AI anonymises.

Jain concluded by saying that generative AI is a “necessary evil that you can’t ignore, so get the tools to help you and the team evaluate. The answer might be that generative AI is not for you, but it’s better to know than not know where it can be applied.”
