<img height="1" width="1" style="display:none;" alt="" src="https://www.facebook.com/tr?id=880824774066981&amp;ev=PageView&amp;noscript=1">
Skip to Content
Vena Solutions
Vena Solutions
Main Content
Blog Home > Data and Tech > Should Finance Teams Use Generative AI? How To Implement AI Securely

Should Finance Teams Use Generative AI? How To Implement AI Securely


What would you do if you had a robo sidekick that could accompany you in every aspect of your job? With recent advances in AI, this future isn’t so far away. 

“In the realm of finance, AI stands to be a game changer,” says Harjot Ghai, Chief Operating Officer at Delbridge Solutions. “It can streamline financial processes by automating data analysis, optimizing resource allocation and improving forecasting accuracy.”  

However, AI also carries inherent risks, which runs counter to finance teams' mandate to minimize their company's exposure to risk.

Misinterpreting AI's output, or relying on it too heavily to inform strategy without considering factors like market volatility and compliance risk, can lead to substantial losses.
 
In this article, we’ll explore the risks finance teams need to consider around generative AI and how to use these tools in a secure and compliant way.  

The Promise and Risks of Generative AI in Finance 

In 2023 alone, generative AI for finance has grown by leaps and bounds. Here are some notable use cases: 

  • Automated financial reporting: Bloomberg's "Cyborg" technology generates earnings reports and coverage by analyzing company earnings data.
  • Financial forecasting and planning: Investment firms like Betterment and hedge funds use AI to forecast market trends and make informed investment decisions. Robo-advisors, which provide automated, algorithm-driven financial planning services with minimal human supervision, are becoming increasingly popular among individual investors.
  • Contract drafting and compliance: JPMorgan Chase implemented an AI program named COIN (Contract Intelligence) to interpret commercial loan agreements.
  • Risk management and fraud detection: Mastercard employs AI algorithms to detect fraudulent transaction patterns in real-time. 

Look closely and you'll notice one thing these use cases and companies have in common: proprietary tools and technologies. These companies are actively investing in bespoke, cutting-edge technologies to gain a competitive edge and address specific challenges in their sector.

The move towards in-house technologies reflects the unique complexities that make using generative AI trickier in finance than in other areas of business. For instance:

  • Data freshness: Consumer-grade AI tools don't always have access to up-to-date data (like new updates to accounting standards), which is crucial for ensuring compliance.
  • Higher risk of errors: Using generic AI without oversight can lead to inaccuracies, causing financial and reputational harm.
  • Regulation vs. innovation mismatch: Generic AI struggles to balance innovation with strict financial regulations. 

Because of these nuances, finance teams must be extra cautious about how they use generative AI if they want to avoid the worst of the potential consequences.

Case Studies: The Pitfalls of Using AI  

Example 1: iTutorGroup’s Discriminatory Hiring  

In August 2023, iTutorGroup agreed to pay $365,000 to settle a lawsuit alleging it discriminated against older applicants in hiring. The bias came in through the AI hiring software the company was using: without sufficient manual screening, the software automatically screened out older applicants.

Financial institutions face even higher stakes with AI tools that review large pools of applicant data: biased outputs can mean discriminating against loan applicants, breaching fair lending laws and inviting reputational damage if those biases become public.

Example 2: Zillow Laid Off 25% of Workforce Due to Algorithmic Inaccuracy 

In November 2021, Zillow announced it would wind down Zillow Offers, its home-flipping business, and lay off 2,000 employees (roughly 25% of its workforce) after its home price-estimating algorithm produced inaccurate prices.

The machine learning tool took into account property data gathered from sources including tax and property records, homeowner-submitted details such as the addition of a bathroom or bedroom, and pictures of the house. It then produced a 'Zestimate' of how much a house would sell for. Those estimates, which Zillow Offers relied on to buy homes, ended up grossly inflating house prices.

As a result, Zillow took a $304 million inventory write-down in Q3 2021, leading to mass layoffs and the eventual closure of its home-buying business.

This is a warning sign for businesses in the asset valuation space, and for any company that intends to use AI to inform big business bets. Algorithmic estimates built on incomplete data sets, or without an understanding of market volatility, can lead to significant financial misjudgments.
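One practical safeguard is to backtest a price-estimating model against realized outcomes before betting the business on it. Here's a minimal sketch of that idea in Python; the data and the 5% bias threshold are hypothetical illustrations, not figures from the Zillow case:

```python
import numpy as np

def backtest_estimates(estimated, actual, bias_threshold=0.05):
    """Compare model price estimates with the prices homes actually sold for.

    Returns the mean signed error (bias) and the mean absolute
    percentage error (MAPE). A positive bias above the threshold
    suggests the model systematically inflates prices.
    """
    estimated = np.asarray(estimated, dtype=float)
    actual = np.asarray(actual, dtype=float)

    pct_error = (estimated - actual) / actual  # signed error per property
    bias = pct_error.mean()                    # systematic over/underestimate
    mape = np.abs(pct_error).mean()            # overall accuracy

    if bias > bias_threshold:
        print(f"WARNING: estimates run {bias:.1%} high on average.")
    return bias, mape

# Hypothetical holdout data: model estimates vs. realized sale prices
estimates = [310_000, 455_000, 250_000, 390_000]
sold_for = [295_000, 430_000, 248_000, 352_000]
bias, mape = backtest_estimates(estimates, sold_for)
print(f"bias={bias:.1%}, MAPE={mape:.1%}")  # bias=5.6%, MAPE=5.6%
```

Run regularly against fresh sales data, a check like this can surface systematic inflation before it compounds into major losses.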

Understanding the Risks of Generative AI for Finance 

Navigating generative AI tools in finance involves striking a balance between their innovative capabilities and their inherent risks.

It's essential for finance teams to be aware of these challenges as they dip their toes in the ocean of AI: 

  • Data Quality and Bias: AI systems are only as good as the data they're trained on. Biased, inaccurate or incomplete data can lead to skewed results. At the same time, generic AI tools (like ChatGPT) shouldn't be fed proprietary data, for fear of violating privacy obligations. That's a real constraint, because proprietary data is exactly what would enable AI models to produce higher quality outputs. (One common mitigation, redacting sensitive details before they leave your environment, is sketched after this list.)
     
  • Regulatory Compliance Issues: Generative AI models aren't always trained on the most recent data (for instance, at the time of writing, the free version of ChatGPT was trained only on data up to September 2021). This can cause issues if you use AI for any kind of financial reporting, as the model may not recognize the most recent updates to accounting standards like ASC and IFRS.

  • Lack of Transparency (Black Box Issue): AI decisions can be a "black box," making it difficult to understand how conclusions are reached. This lack of transparency can be problematic in finance, where stakeholders need clarity.

  • Vulnerability to Cybersecurity Threats: AI systems can also be susceptible to cyber attacks and software flaws. ChatGPT, in particular, experienced a 2023 bug that exposed some users' email addresses and partial payment details, leaving them vulnerable to cyber criminals.

  • Operational and Market Risk: AI can misinterpret market conditions, leading to operational risk. The Zillow case study above is a prime example.
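On the proprietary-data point above, a simple first line of defense is to redact identifying details before any text is sent to an external AI tool. The Python sketch below is illustrative only; the regex patterns are hypothetical examples, and a production system would need a vetted PII-detection library rather than a handful of hand-rolled rules:

```python
import re

# Illustrative patterns only: real redaction needs a vetted PII library
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),            # US SSNs
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[CARD_OR_ACCT]"),  # card/account numbers
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),    # email addresses
]

def redact(text: str) -> str:
    """Mask common identifiers before text leaves your environment."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarize this memo for client jane.doe@example.com, SSN 123-45-6789."
print(redact(prompt))
# -> Summarize this memo for client [EMAIL], SSN [SSN].
```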

Integrating Generative AI in Finance 

Despite the risks we outlined above, it would be foolish to reject AI’s advantages. With the potential to save hours of work on tasks like data formatting, modeling and interpretation, the business impact of AI is too strong to ignore. 

To take advantage of AI today and five years from now, you need to start strategically building processes for assessing and deploying generative AI tools.  

As you research AI systems to integrate into your finance function, here are four questions you should be asking: 
 

  • What measures are in place for data security and privacy? (For example, encryption technologies.)
    What it evaluates: The security of your company's internal data and the data of your customers.
  • What is the system's decision-making process?
    What it evaluates: Whether you can avoid the black box problem, where you can't trace how the AI reached its conclusions.
  • How recent is the data the AI model is pulling from?
    What it evaluates: Whether you can trust the tool when the outputs you need depend on conditions that change often.
  • What is the vendor's track record and experience in the finance sector?
    What it evaluates: Industry-specific experience and credibility.

 

You'll also need to be mindful of how these AI tools would integrate into your operations and tech stack, and continuously monitor them to ensure you can trust the output you're getting.

Here's the advice of Rob Drover, VP of Business Solutions at Marcum Technology, on how to do this, as he tells Vena CFO Melissa Howatson on an episode of The CFO Show podcast:
 


If you use a machine learning model to make predictions, test that model in real life conditions to ensure that the predictions align. Because if the model is coded incorrectly, it’ll skew the predictions.  

This is especially sensitive for businesses that do underwriting for mortgages, insurance or other qualifications. You want to make sure the model is built carefully and that you understand how it's trained, so it makes valid predictions and isn't biased towards certain populations or income levels (like in the iTutorGroup case above).

Most importantly, understand how your AI model is built and how it reacts.  

Consumer-grade AI tools like ChatGPT work on a system where you input information and get an output. What you need to ask is “where does that information go when I enter it?” 

For example, what happens if an employee takes a client’s tax return and uploads it? Where does that information go? 

Is that confidential information going to show up in someone else’s search next time they query the model? 
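To make Rob's bias point concrete: a basic fairness audit compares a model's decisions across groups. The sketch below is a hypothetical Python example using the "four-fifths rule" common in US employment analysis; the decision data and threshold are illustrative assumptions, not figures from any case discussed above:

```python
from collections import defaultdict

def approval_rates_by_group(decisions):
    """decisions: iterable of (group_label, approved_bool) pairs."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

def four_fifths_check(rates, threshold=0.8):
    """Flag any group whose rate falls below 80% of the best group's rate."""
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Hypothetical model decisions tagged with the applicant's age band
decisions = ([("under_40", True)] * 90 + [("under_40", False)] * 10
             + [("40_plus", True)] * 55 + [("40_plus", False)] * 45)

rates = approval_rates_by_group(decisions)
print(rates)                     # {'under_40': 0.9, '40_plus': 0.55}
print(four_fifths_check(rates))  # {'under_40': True, '40_plus': False}
```

A failed check like the one for the 40-plus group here is exactly the kind of disparity that should send a model back for review before it touches real applicants.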

Understanding where generative AI falls short is key to incorporating it mindfully into your tech stack. A few enterprise AI tools like Microsoft Copilot have ways of addressing the pitfalls we’ve outlined here.  

The Advantage of Microsoft Copilot for Finance 

Microsoft recently released Copilot, an enterprise-ready AI sidekick. It integrates with the Microsoft 365 suite, including Excel, which is particularly exciting for finance teams because it offers the following features:

  • Sidecar (Copilot’s main chat interface) to provide conversational support to the user as they navigate through Microsoft apps
  • Data analysis built into Excel
  • Data visualization features
  • Financial modeling and what-if analysis 

Copilot adheres to Microsoft’s AI principles, ensuring compliance with government regulations and strict data security measures, including: 

  • Data Control and Security: Copilot is powered by Azure OpenAI Service and ensures customer data privacy and security within the Azure cloud, with no third-party data sharing without consent.
  • Encryption and Privacy: Customer content in Copilot is encrypted both at rest and in transit using technologies like BitLocker and TLS.
  • Data Residency and Compliance: For EU users, Copilot adheres to the EU Data Boundary.
  • Tenant and Group Data Protection: Copilot's permissions model prevents data leakage, showing users only the data they are authorized to access, maintaining strict tenant data security. 

Even with guardrails in place, the output from generative AI isn’t flawless. It needs human oversight.

In a research blog about GPT-4 (the LLM that powers Copilot), OpenAI states, "despite its capabilities, GPT-4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it "hallucinates" facts and makes reasoning errors)."

Copilot, however, minimizes the likelihood of these errors by grounding prompts with additional business context from your Microsoft 365 apps. 
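"Grounding" here means injecting relevant, current business data into the prompt so the model answers from your context instead of relying on its training data alone. Below is a minimal conceptual sketch of the pattern; the retrieval function and its sample data are placeholders invented for illustration, not Copilot's actual API:

```python
def retrieve_context(question: str) -> list:
    """Placeholder: a real system would search your spreadsheets,
    documents or ERP data for passages relevant to the question."""
    return [
        "Q3 travel spend: $412,000 (budget: $350,000).",
        "Policy update (Oct): travel over $5,000 requires VP approval.",
    ]

def grounded_prompt(question: str) -> str:
    """Prepend retrieved business context so the model answers from
    your data instead of guessing from its training set alone."""
    context = "\n".join(f"- {c}" for c in retrieve_context(question))
    return (
        "Answer using ONLY the context below. If the context is "
        "insufficient, say so rather than guessing.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(grounded_prompt("How did Q3 travel spend compare to budget?"))
# The assembled prompt is then sent to whatever LLM you use.
```

This retrieve-then-constrain pattern is the general idea behind most enterprise approaches to reducing hallucinations.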

Develop Faster Insights from Your Data With Vena 

With Vena Insights, you can get more from the data already living in Vena through embedded Power BI and Microsoft's best-in-class AI and machine learning technology.

You can take advantage of built-in AI features such as: 

  • Natural Language Processing: Use natural language prompts to ask questions of your data.
  • Predictive Analytics: Make forward-looking predictions based on your actuals with the help of machine learning. Control which inputs you want to factor into your forecast.
  • Anomaly Detection: Detect unusual patterns in your data and investigate them with possible explanations supplied by AI (the sketch below illustrates the underlying idea).
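To show what anomaly detection means in practice, here's a generic z-score sketch in Python; it is not Vena's actual implementation, and the monthly figures are made up:

```python
import statistics

def flag_anomalies(series, z_threshold=2.0):
    """Flag periods whose value sits more than z_threshold standard
    deviations from the mean of the whole series."""
    values = [v for _, v in series]
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [(label, value, round((value - mean) / stdev, 2))
            for label, value in series
            if abs(value - mean) > z_threshold * stdev]

# Hypothetical monthly OpEx actuals, in $K
monthly_opex = [("Jan", 102), ("Feb", 98), ("Mar", 105), ("Apr", 101),
                ("May", 99), ("Jun", 104), ("Jul", 171)]
print(flag_anomalies(monthly_opex))
# -> [('Jul', 171, 2.26)]: a candidate for investigation
```

A tool like Vena Insights layers AI-suggested explanations on top of flags like this, rather than leaving you to chase them down manually.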

To get a closer look at each of these features, check out our blog, Advanced Power BI Features in Vena: 6 AI Tools You Need To Try.  

Final Thoughts 

In our view, finance professionals have a lot to gain from incorporating AI: enhanced speed, improved analytics and advanced risk assessment capabilities. But only after thorough evaluation, a deep understanding of the model's decision-making processes and clear guidelines on how these kinds of tools should be used.

Given the awe-inducing speed with which AI has grown in sophistication, getting this balance of innovation and caution right is something that should be on every finance team’s mind.  

“Today we probably spend 90 percent of our time trudging through the numbers and 10 percent analyzing and coming up with new strategies for the future,” says Rob on The CFO Show. “If we could possibly achieve a 70/30 ratio, or an 80/20 ratio (a 10 percent increase), that would make a tremendous impact on the quality of decisions that organizations make and their ability to adapt to new data.

“We're at that very high end of the transactional scale, and even small incremental improvements would free up four to five hours from someone's week.”

 
