Wednesday, October 16, 2024

Fintechs Navigating AI Growth 

AI in fintech is set to grow rapidly as it helps improve security, streamline processes, and meet strict regulations. Companies like Lucinity and Gilion are on the frontlines, confronting major challenges such as maintaining transparency, addressing ethical concerns, and staying ahead of complex regulations. Their strategic use of AI isn’t just about gaining an edge; it’s about staying relevant and trusted in a changing landscape. 

Cutting-edge technologies like AI are becoming more essential to finance, offering cost reduction, streamlined financial management, and increased earnings for businesses and clients. The exact figures vary from survey to survey, but the conclusion is clear: AI in fintech is projected to grow significantly over the next five years.  

AI’s ability to prevent fraud and cyberattacks is a key market driver, as customers prioritise secure banking experiences. Meanwhile, companies and startups in the industry are actively developing next-gen AI solutions.  

Lucinity leverages generative AI and machine learning through its Generative Intelligence Process Automation (GIPA) framework to revolutionise financial crime prevention. Key components include Luci Studio, a no-code platform for creating and deploying AI skills, and federated learning for privacy-preserving model improvement. 

“The framework offers customised AI solutions that learn and improve over time. This helps financial institutions streamline operations, meet regulatory requirements, and increase efficiency,” says Theresa Bercich, CPO and Co-founder of Lucinity. 


As AI systems become more central to decision-making processes, the need to mitigate risks like algorithmic bias and ethical issues has grown significantly.  

“To avoid biases, we must ensure that the training data is diverse and representative of different populations. Second, regular audits are conducted on AI models to detect and correct any biases that may develop over time,” Bercich says.   

Third, explainable AI techniques are implemented to make AI decisions transparent and understandable.  

“We must ensure that users can trust the AI’s outputs and understand the rationale behind its decisions,” says Bercich.  

Don’t miss out: Catch the latest Nordic Fintech Magazine here

THE RISE OF XAI 

Explainable AI, or XAI, techniques provide insight into how AI models reach their conclusions, aiming to make the decision-making processes of AI systems more transparent and understandable to humans. 

Several techniques collectively help in understanding and trusting AI decisions. Common methods include feature importance, which shows which inputs most affect a decision, and visualisation techniques such as heatmaps that illustrate input-output relationships. Simple surrogate models can mimic complex ones, counterfactual explanations show how changing an input would alter the outcome, and some models, like decision trees, are naturally easy to interpret. 
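To make one of these methods concrete, here is a minimal sketch of permutation feature importance: shuffle a single feature across a dataset and measure how much the model's outputs move on average. The toy credit-scoring function, its weights, and the sample data are invented for illustration; they do not reflect Lucinity's or Gilion's actual models.

```python
import random

# Hypothetical toy scoring model (illustration only): a weighted sum
# of three features. Weights are invented for this sketch.
def score(income, debt_ratio, years_at_job):
    return 0.6 * income - 0.3 * debt_ratio + 0.1 * years_at_job

def permutation_importance(model, rows, feature_idx, seed=0):
    """Shuffle one feature column and return the mean absolute change
    in the model's output; larger change = more important feature."""
    rng = random.Random(seed)
    baseline = [model(*row) for row in rows]
    shuffled_col = [row[feature_idx] for row in rows]
    rng.shuffle(shuffled_col)
    perturbed = [
        model(*[shuffled_col[i] if j == feature_idx else v
                for j, v in enumerate(row)])
        for i, row in enumerate(rows)
    ]
    return sum(abs(a - b) for a, b in zip(baseline, perturbed)) / len(rows)

# Invented sample rows: (income, debt_ratio, years_at_job).
rows = [(50, 0.4, 2), (80, 0.2, 10), (30, 0.7, 1), (65, 0.5, 5)]
for idx, name in enumerate(["income", "debt_ratio", "years_at_job"]):
    print(name, round(permutation_importance(score, rows, idx), 3))
```

Because the toy model weights income most heavily and income varies most across the rows, shuffling it moves the output the most, which is exactly the signal a feature-importance report surfaces to a reviewer.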

The need for XAI is multi-faceted and varies by audience, including end users, AI developers, and product engineers. End users require trust and reassurance through understandable processes and feedback. AI developers must grasp current model limitations to validate and enhance future versions. Product engineers across various domains need access to, and the ability to optimise, decision explanations to deploy AI systems effectively in real-world environments. 

Lucinity addresses transparency issues in AI by implementing explainable AI techniques that provide clear and understandable explanations of decisions. The aim is to build trust and ensure users can comprehend AI outputs.  

“We maintain comprehensive audit logs and traceability, allowing every AI decision to be tracked and reviewed, further enhancing accountability,” Bercich says.  

Research from McKinsey shows that companies achieving significant returns from AI – those attributing at least 20 per cent of EBIT to AI – are more likely to follow best practices for explainability. Additionally, organisations that build digital trust with consumers by making AI explainable are more likely to see annual revenue and EBIT growth rates of 10 per cent or more. 

“We prioritise transparency and understandability in our platform, designing with data experience at the core, which is essential for us and our customers,” says Elin Bäcklund, CTO at Gilion.  

Gilion is a Swedish fintech that offers real-time growth loans and analytics to help businesses accelerate their growth.  

“Our customers rely on our advanced forecasting capabilities to plan scenarios, fundraise and monitor strategic bets. At the same time, we’re using the platform’s capabilities for our financial decisions regarding millions of euros. So, having precise, unbiased models is an essential part of our business model. The same precision as well as transparency that our own team needs, we give back to the founders,” Bäcklund explains.  


DEALING WITH BUILT-IN BIASES 

The finance industry is rife with inherent bias, evident in the overrepresentation of certain types of founders and companies in equity-funded data sets, according to Bäcklund. 

“Fundraising has been a game of knowing the right people and making the right pitch. Money is inaccessible, and the funding process lacks transparency,” she says. 

One often-cited example of algorithmic discrimination in the financial sector pertains to credit decisions, where automated systems magnify historical trends or exclude certain demographic groups because of the data they were trained on. 

AI systems are only as reliable as the data they are trained on: incomplete or unrepresentative datasets can compromise AI’s objectivity. At the same time, blind spots within development teams can further entrench these biases in the system. 
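A simple form of the bias audits described above is a demographic-parity check: compare a model's approval rates across groups defined by a protected attribute and flag large gaps. The sketch below uses invented decision data and an illustrative tolerance threshold; the function names and numbers are assumptions for this example, not any company's actual audit process.

```python
# Hypothetical bias-audit sketch: compare approval rates across groups.
def approval_rate(decisions):
    """Fraction of positive (approved) decisions in a group."""
    return sum(decisions) / len(decisions)

def parity_gap(decisions_by_group):
    """Difference between the highest and lowest group approval rates."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Invented data: 1 = approved, 0 = declined, grouped by a protected attribute.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 6 of 8 approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3 of 8 approved
}

gap = parity_gap(decisions)
print(f"approval-rate gap: {gap:.3f}")
if gap > 0.1:  # illustrative tolerance, chosen for this sketch
    print("audit flag: investigate model and training data for bias")
```

In practice an audit would use statistically meaningful sample sizes and additional fairness metrics, but the principle is the same: make the disparity measurable so it can be detected and corrected, rather than relying on gut feel.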


By employing a systematic, data-driven approach to investment decisions powered by AI, Gilion aims to minimise unconscious biases and enhance the accuracy of company analysis. 

“It’s much easier to do bias training or diversify our portfolio when AI plays a part in our decision-making. Our scoring models can be tweaked, and we can clean our data sets to make them more representative. It’s much harder to iterate a human gut feel,” says Bäcklund.  

COMPLIANCE EVOLUTION 

Regulations and legislation in the Nordic region, such as GDPR and the proposed EU AI Act, heavily influence AI startups in the fintech industry by enforcing strict standards on data privacy, transparency, and ethical AI use. These regulations require fintech companies to prioritise compliance from the outset, potentially increasing costs and time to market.  

However, they also provide a competitive edge for companies that can meet these high standards, fostering trust and credibility in a region that values responsible AI practices. 

“The Nordic region’s emphasis on sustainability and ethical business practices aligns well with our commitment to responsible AI. The generally favourable regulatory environment supports innovation while ensuring robust data protection and ethical standards,” says Bercich.  

However, regulatory hurdles do exist.  

“Navigating the varying regulations across different jurisdictions can be challenging. As AI technology and its applications evolve, ensuring compliance with new and existing regulations requires continuous monitoring and adaptation,” Bercich adds.  

Both Bercich and Bäcklund agree that a key advantage has been the availability of a highly skilled talent pool in AI, machine learning, and financial technologies. 

“The innovation-friendly environment has attracted tech talent and contributed to a richer tech scene, which has been imperative for us in building our tech team and reaching our initial customers,” Bäcklund says.  

The Nordics account for only 2 per cent of global AI talent, according to 2022 findings from Silo Research and the OECD.  

To address this, the private and public sectors in the Nordics have developed innovative educational programs to nurture digital and tech talent, such as Finland’s “AI for Built Environment” certification and Sweden’s national “AI competence for Sweden” curriculum, along with the “Elements of AI” online course available across all Nordic countries.  

It’s increasingly important for companies to bridge gaps in AI talent and skills, and according to a report from Accenture, more Nordic companies now have a defined AI talent strategy that includes protocols for hiring data scientists and domain experts, while many are also developing new strategies to collaborate and maximise value from data science capabilities. 

STAYING AHEAD OF THE CURVE 

A report from Accenture states that only 6 per cent of Nordic companies have successfully integrated AI into their core operations and strategies, achieving significant business outcomes as a result, compared to 12 per cent of companies in Europe. 

Fintechs like Gilion and Lucinity are leading the development of cutting-edge solutions that are transforming the financial industry. Their ability to leverage advanced technologies like machine learning and data analytics gives them a significant advantage in creating innovative products that meet the evolving needs of consumers and businesses.  

Staying ahead of the curve will require them to not only refine their existing technologies but also explore new opportunities, address emerging risks, and keep pace with regulatory changes. 


“Many of us have been using AI to support investment decisions since long before the rapid AI development started a couple of years ago. We feel compelled to lead the advancement and innovation of the field. We’re constantly experimenting with the latest technologies within AI to stay at the forefront,” Bäcklund says, adding:  

“But we’re keeping our strategy of applying these new models where it makes sense. LLMs are outstanding at certain tasks, like making sense of hundreds of PDF documents with financials and turning them into structured data. So, we can put these models to work on isolated tasks, which makes our humans work more efficiently and leaves room for important analysis and decision making.”  

Given the rapid advancements in AI, Lucinity avoids potential negative impacts while continuing to attract investment through several key strategies. 

“We prioritise ethical AI development by implementing explainable AI techniques, ensuring our systems are transparent and accountable. Regular audits and comprehensive audit logs help detect and correct biases, maintaining the integrity of our solutions,” Bercich says.  

Lucinity’s UI and UX differentiate between AI-generated content and human actions, ensuring transparency. The UX is designed for expandability, allowing users to easily customise AI skills to meet specific needs. 

“We maintain open and transparent communication with investors, demonstrating our commitment to ethical AI and long-term sustainability,” Bercich concludes.  

Jakob Lindmark Frier
Jakob is the founder of and partner at TechSavvy Media and currently works at Digital Hub Denmark. As an editor, he has covered tech and startups in Denmark for over a decade, and he previously had the pleasure of spearheading Copenhagen Fintech Magazine as editor-in-chief.