What would you do if you had a robo sidekick that could accompany you in every aspect of your job? With recent advances in AI, this future isn’t so far away.
“In the realm of finance, AI stands to be a game changer,” says Harjot Ghai, Chief Operating Officer at Delbridge Solutions. “It can streamline financial processes by automating data analysis, optimizing resource allocation and improving forecasting accuracy.”
However, AI also carries inherent risks, which fly in the face of finance teams’ mandate to minimize their company’s exposure to risk.
Misinterpreting AI’s outputs, or relying on them to inform strategy without weighing factors like market volatility and compliance risk, can lead to substantial losses.
In this article, we’ll explore the risks finance teams need to consider around generative AI and how to use these tools in a secure and compliant way.
In 2023 alone, generative AI for finance has grown by leaps and bounds. Here are some notable use cases:
Look closely and you’ll notice one thing all of these use cases and companies have in common: proprietary tools and technologies. These companies are actively investing in bespoke, cutting-edge technology to gain a competitive edge and address challenges specific to their sector.
The move towards in-house technologies reflects the unique complexities that make using generative AI in finance trickier than in other areas of business. For instance:
It's because of these nuances that finance teams must be extra cautious about how they use generative AI, to avoid the worst potential consequences.
In August 2023, iTutorGroup agreed to pay $365,000 to settle an EEOC lawsuit over age discrimination in hiring. The bias came from the AI recruiting software the company used: without sufficient manual screening, the software automatically rejected female applicants aged 55 and older and male applicants aged 60 and older.
Financial institutions face even higher stakes with AI tools that review large pools of applicant data: a biased model can discriminate against loan applicants, breach fair lending laws and invite reputational damage if those biases become public.
In November 2021, Zillow announced it would wind down Zillow Offers, its home-flipping business, and lay off 2,000 employees after its home price-estimating algorithm produced inaccurate prices.
The machine learning tool took into account property data gathered from sources including tax and property records, homeowner-submitted details (such as the addition of a bathroom or bedroom) and pictures of the house. It then produced a ‘Zestimate’ of how much a house would sell for. This service, which fed Zillow’s home-flipping offers, ended up grossly overestimating house prices.
As a result, Zillow took a $304 million inventory write-down in Q3 2021, leading to mass layoffs and the eventual closure of its home-buying business.
This is a cautionary tale for businesses in the asset valuation space, and for any company that intends to use AI to inform big business bets. Algorithmic estimates built on incomplete data sets, or on a poor understanding of market volatility, can lead to significant financial misjudgments.
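To make this failure mode concrete, here’s a minimal sketch in Python (all numbers are made up for illustration). It fits a simple trend model on two years of rising prices, then shows how the model keeps predicting growth after the market flattens:

```python
# Hypothetical illustration: a price model fitted only on a rising
# market keeps extrapolating that trend after conditions change.
import numpy as np

rng = np.random.default_rng(0)

# Training data: 24 months of steadily rising home prices (in $ thousands).
months_train = np.arange(24)
prices_train = 300 + 5 * months_train + rng.normal(0, 4, size=24)

# Fit a simple linear trend -- a stand-in for any model that has
# only ever seen an up market.
slope, intercept = np.polyfit(months_train, prices_train, deg=1)

# The market then flattens for six months, but the model keeps climbing.
months_future = np.arange(24, 30)
actual_future = np.full(6, prices_train[-1]) + rng.normal(0, 4, size=6)
predicted_future = intercept + slope * months_future

overestimate = predicted_future - actual_future
print(f"Average overestimate per home: ${overestimate.mean():.0f}k")
```

The model looks accurate on historical data; the error only surfaces once conditions change, which is exactly why estimates built on incomplete or stale data can inspire false confidence.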
Navigating generative AI tools in finance means striking a balance between their innovative capabilities and their inherent risks.
It’s essential for finance teams to be aware of these challenges as they dip their toes into AI:
Despite the risks outlined above, it would be foolish to reject AI’s advantages. With the potential to save hours of work on tasks like data formatting, modeling and interpretation, AI’s business impact is too strong to ignore.
To take advantage of AI today and five years from now, you need to start strategically building processes for assessing and deploying generative AI tools.
As you research AI systems to integrate into your finance function, here are four questions you should be asking:
| Question | What It Evaluates |
| --- | --- |
| What measures are in place for data security and privacy? (For example, encryption technologies) | The security of your company’s internal data and the data of your customers |
| What is the system’s decision-making process? | Transparency: crucial to avoid the Black Box Conundrum, where AI tools use circular logic and reach preposterous conclusions |
| How recent is the data the AI model is pulling from? | Data freshness: if the outputs you need depend on conditions that change often, this is an important factor in whether you can trust the tool |
| What is the vendor’s track record and experience in the finance sector? | Industry-specific experience |
You’ll also need to be mindful of how these AI tools will integrate into your operations and tech stack, and continuously monitor them to ensure you can trust the output you’re getting.
Here’s the advice Rob Drover, VP of Business Solutions at Marcum Technology, shared with Vena CFO Melissa Howatson on an episode of The CFO Show podcast:
If you use a machine learning model to make predictions, test that model in real-life conditions to ensure the predictions hold up, because if the model is coded incorrectly, it will skew the predictions.
This is especially sensitive for businesses that do underwriting for mortgages, insurance or other qualifications. You want to make sure the model is built carefully, and that you understand how it’s trained, so it makes valid predictions and isn’t biased toward certain populations or income levels (as in the iTutorGroup case above).
Most importantly, understand how your AI model is built and how it reacts.
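To make that advice concrete, here’s a minimal sketch of what testing a model against real-life outcomes might look like in Python. The data, column names and error metric are illustrative assumptions, not a prescribed implementation:

```python
# Hypothetical sketch: checking a model's predictions against realized
# outcomes, overall and by population segment. All column names and
# data below are made up for demonstration.
import pandas as pd

def mape(df: pd.DataFrame, pred_col: str, actual_col: str) -> float:
    """Mean absolute percentage error of predictions vs. real outcomes."""
    return float((abs(df[pred_col] - df[actual_col]) / df[actual_col]).mean())

def mape_by_group(df: pd.DataFrame, pred_col: str, actual_col: str,
                  group_col: str) -> dict:
    """The same error metric, broken out by segment. Large gaps between
    segments suggest the model is biased toward certain populations."""
    return {segment: mape(sub, pred_col, actual_col)
            for segment, sub in df.groupby(group_col)}

# Made-up underwriting data for demonstration:
df = pd.DataFrame({
    "predicted_value": [310_000, 455_000, 198_000, 520_000],
    "realized_value":  [295_000, 470_000, 205_000, 460_000],
    "income_band":     ["mid", "high", "low", "high"],
})
print(f"Overall error: {mape(df, 'predicted_value', 'realized_value'):.1%}")
print(mape_by_group(df, "predicted_value", "realized_value", "income_band"))
```

If the error for one income band is markedly worse than for the others, that’s a signal to revisit how the model was trained before trusting its predictions.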
Consumer-grade AI tools like ChatGPT work on a simple exchange: you input information and get an output. What you need to ask is, “Where does that information go when I enter it?”
For example, what happens if an employee takes a client’s tax return and uploads it? Where does that information go?
Is that confidential information going to show up in someone else’s search next time they query the model?
Understanding where generative AI falls short is key to incorporating it mindfully into your tech stack. A few enterprise AI tools like Microsoft Copilot have ways of addressing the pitfalls we’ve outlined here.
Microsoft recently released Copilot, an enterprise-ready AI sidekick. It integrates with the Microsoft 365 suite, including Excel, which is particularly exciting for finance teams because it offers the following features:
Copilot adheres to Microsoft’s AI principles, ensuring compliance with government regulations and strict data security measures, including:
Even with guardrails in place, the output from generative AI isn’t flawless. It needs human oversight.
In a research blog post about GPT-4 (the model behind Copilot), OpenAI states: “Despite its capabilities, GPT-4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors).”
Copilot, however, minimizes the likelihood of these errors by grounding prompts with additional business context from your Microsoft 365 apps.
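Conceptually, grounding looks something like the sketch below: trusted business data is attached to the user’s question before it reaches the model. This is a simplified illustration, not Copilot’s actual mechanism, and `call_llm` is a hypothetical stand-in for whatever LLM client you use:

```python
# Simplified illustration of "grounding" a prompt with business context.
# Not Copilot's actual mechanism; `call_llm` is a hypothetical stand-in.
def build_grounded_prompt(user_question: str, context_snippets: list[str]) -> str:
    context = "\n".join(f"- {snippet}" for snippet in context_snippets)
    return (
        "Answer using only the business context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {user_question}"
    )

# Example context, e.g. figures pulled from a spreadsheet (made up):
snippets = [
    "Q3 travel spend: $1.2M, 8% over budget",
    "Policy: flights over $800 require VP approval",
]
prompt = build_grounded_prompt("Why is travel spend over budget?", snippets)
# response = call_llm(prompt)  # hypothetical client call
```

Constraining the model to context you supply narrows the space of possible answers, which reduces (but doesn’t eliminate) the risk of hallucinated facts.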
With Vena Insights, you can get more from the data already living in Vena through embedded Power BI and Microsoft’s best-in-class AI and machine learning technology.
You can take advantage of built-in AI features such as:
To get a closer look at each of these features, check out our blog, Advanced Power BI Features in Vena: 6 AI Tools You Need To Try.
In our view, finance professionals have a lot to gain from incorporating AI: enhanced speed, improved analytics and advanced risk assessment capabilities. But those gains come only after thorough evaluation, a deep understanding of the model’s decision-making processes and clear guidelines on how these kinds of tools should be used.
Given the awe-inducing speed with which AI has grown in sophistication, getting this balance of innovation and caution right is something that should be on every finance team’s mind.
“Today we probably spend 90 percent of our time trudging through the numbers and 10 percent analyzing and coming up with new strategies for the future,” says Rob on The CFO Show. “If we could possibly achieve a 70/30 ratio, or an 80/20 ratio (a 10-percentage-point shift), that would make a tremendous impact on the quality of decisions that organizations make and their ability to adapt to new data.
“We’re at that very high end of the transactional scale, and even small incremental improvements would free up four to five hours of someone’s week.”