Banks have been using machine learning (ML) and artificial intelligence (AI) for decades to determine credit scores, predict trends, and spot fraudulent transactions. However, the emergence of a new type of AI, generative AI (GenAI), has caught even experienced data scientists off guard with its sophistication.
GenAI presents numerous opportunities for the banking sector, particularly in customer service. However, the technology also enables attacks of a sophistication never seen before, forcing banks to urgently reevaluate their internal processes.
“Not everyone is aware of the urgency. Financial institutions should take action now rather than later,” says Sophia Qureshi, Vice President, Fraud Solutions at Provenir, a decisioning platform for financial services providers.
Deepfakes and no-code phishing websites
The threats posed by the dozens of GenAI tools now available affect a bank both internally and externally. Internally, one of the immediate areas for attention is KYC processes, says Qureshi. Numerous viral online posts have already demonstrated, using freely available tools, that automated KYC might soon come to an end.
Online tutorials even show how to use a popular AI image generator to create an image of a person's face against any background, such as the typical photo of someone holding a card with text on it that is used in KYC verification.
Voice AI scams are a growing threat, CBS News reported just a few months ago. The article details the reporters' experience with an online AI voice-cloning tool that costs only $5 and needs just a small voice sample to generate speech from any text.
“For example, in a news interview, you could take quite a good sample of how I talk, what words I use, and the intonation. You could then feed that into a generative AI tool, and it would be able to generate my voice to say whatever you typed,” says Qureshi.
Just before this article went to press, Thomas Krogh Jensen, CEO of Copenhagen FinTech, participated in an experiment demonstrating how easily anyone can create a full-scale video deepfake using only a voice sample, basic video footage, and the right GenAI software. The resulting deepfakes are remarkably accurate, and every sign indicates they will become even more so.
External threats
Externally, GenAI enables any criminal to create phishing websites without any coding knowledge and to write convincing phishing emails free of grammatical errors.
“You can tell the tools to write to a certain persona and use a certain style of language, and it does so incredibly well,” Qureshi tells NFM. “ChatGPT can also write code very well. Whereas, before, you’d require some level of skill for these attacks, now anyone will be able to do them.”
Where the solutions lie
Qureshi says that the first stage of combatting this growing problem is awareness. “I studied computer science and yet even I was surprised by the immense capabilities of this new technology. I believe that many in this space are perhaps not yet fully aware of how powerful it truly is,” she says.
That awareness extends to bank employees, so they understand just how believable text, voice, and video created by generative AI can be. On the technological side, Qureshi envisions a battle of AI versus AI, as banks begin to use their own AI, trained on customer behaviour patterns, to detect deviations from those patterns.
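As a rough illustration of what such behaviour-based detection could look like, the sketch below trains an isolation forest on a single customer's transaction history and flags activity that breaks the pattern. The features, figures, and library choice are illustrative assumptions, not a description of any bank's or Provenir's actual system.

```python
# Illustrative sketch only: anomaly detection over per-customer transaction
# features, in the spirit of the "AI vs. AI" defence described above.
# Feature names and values are hypothetical, not a production design.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated history for one customer: [amount, hour_of_day, merchant_risk_score]
history = np.column_stack([
    rng.normal(60, 15, 500),    # typical spend around 60
    rng.normal(14, 3, 500),     # mostly daytime activity
    rng.uniform(0, 0.2, 500),   # low-risk merchants
])

# Train on the customer's own behaviour so deviations stand out
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

# New activity: one transaction that fits the pattern, one that does not
new_txns = np.array([
    [55.0, 13.0, 0.1],    # looks like the customer
    [900.0, 3.0, 0.9],    # large amount, 3 a.m., high-risk merchant
])
print(model.predict(new_txns))  # 1 = consistent with pattern, -1 = flagged
```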
This approach raises numerous ethical questions about how much customer data a bank should collect. Those questions can't be ignored, any more than the growing threat can. “The main thing is that financial institutions should start thinking that they must have a strategy around this. Moving forward, they should look at everything in their processes and policies from behind that lens,” Qureshi says.