
How ChatGPT Can - and Can’t - Assist Your Compliance Team

“ChatGPT can assist the compliance function at banks and credit unions by providing automated responses and insights to common compliance-related inquiries and issues.”


That was ChatGPT’s response when asked to write a single sentence about how it can assist banks and credit unions with compliance.


Grammatical, yes.


Helpful? Not particularly.


As any bank compliance professional knows, providing responses to common compliance questions is easy. What takes time is resolving more complex or nuanced dilemmas, and this is something ChatGPT itself seems to acknowledge is beyond its ken.


What Is ChatGPT?


Created by OpenAI, ChatGPT is an artificial intelligence (AI) chatbot built on a large language model trained on a vast body of Internet text gathered through 2021. It generates the best possible (most probable) response based on patterns in that training data. In essence, ChatGPT has the ability to crowdsource responses to questions using the collective wisdom of the Internet.


GPT-4, a more capable system available to paid subscribers, was released on March 14, 2023. According to recent reports, GPT-4 can pass the bar exam, scoring at roughly the 90th percentile of all test takers.


No question, generative AI technology is dazzling and has enormous potential. The global generative AI market is expected to reach $109.37 billion by 2030, according to a December 2022 report by Grand View Research. The report projects that the market will expand at a compound annual growth rate (CAGR) of 35.6 percent from 2022 to 2030.
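As a quick sanity check on those figures, compounding backward from the 2030 projection shows what they imply about the size of the market today. This is a sketch only: the eight-year window (2022 to 2030) is an assumption about how the report counts growth years, not something the article states.

```python
# Sanity-check the Grand View Research projection: a market reaching
# $109.37 billion in 2030 at a 35.6% CAGR implies a much smaller base.
# Figures come from the article; the 8-year window is an assumption.
target_2030 = 109.37   # projected 2030 market size, in $ billions
cagr = 0.356           # compound annual growth rate
years = 8              # assumed growth years, 2022 -> 2030

implied_2022 = target_2030 / (1 + cagr) ** years
print(f"Implied 2022 market size: ${implied_2022:.2f}B")  # roughly $9.6B
```

In other words, a 35.6 percent CAGR means the market is projected to grow more than tenfold over the period, which is what makes the headline number so striking.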


What AI Can Do in Compliance Today


Following are some areas where AI really shines when helping compliance professionals:

  • Producing first drafts for compliance manuals and other straightforward communications tasks. Getting past writer’s block is one of the areas where ChatGPT excels. Once ChatGPT has put a rough draft on the page, a skilled compliance professional can correct any errors in the AI-generated material and then add nuance or context.

  • Raising red flags. When it comes to BSA/AML (Bank Secrecy Act and Anti-Money Laundering) compliance, AI tools are skilled at analysis and can identify possible violations. These red flags can then be investigated by a human compliance officer to determine which possible violations do, in fact, signal worrisome behaviors.

  • Acting as a type of SparkNotes for professional articles and lengthy presentations. “One of the most powerful abilities of A.I. language models is quickly summarizing large amounts of text,” according to a March 30, 2023, article in The New York Times. This article suggests that AI programs are particularly well suited to summarizing dense academic papers, or even condensing podcasts or other presentations into bullet points. It’s even possible, the article says, to “get a concise summary of a work meeting while you’re still in the meeting.”
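The red-flag workflow described above, automated detection followed by human review, can be illustrated with a deliberately simple, rule-based sketch. Real BSA/AML monitoring systems use far more sophisticated statistical and machine-learning models; the customer IDs, amounts, and thresholds below are hypothetical and purely for illustration.

```python
# Hypothetical sketch of the flag-then-review workflow: a simple rule
# surfaces possible structuring (repeated deposits kept just under the
# $10,000 currency-transaction-reporting threshold), and every flag
# goes to a human compliance officer for investigation.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Transaction:
    customer_id: str
    amount: float

CTR_THRESHOLD = 10_000  # U.S. currency transaction report threshold

def flag_possible_structuring(txns, min_occurrences=3):
    """Flag customers with repeated deposits just under the threshold."""
    near_threshold = defaultdict(int)
    for t in txns:
        if 0.9 * CTR_THRESHOLD <= t.amount < CTR_THRESHOLD:
            near_threshold[t.customer_id] += 1
    # A red flag is only a prompt for human review, not a determination
    # that a violation actually occurred.
    return [cust for cust, n in near_threshold.items()
            if n >= min_occurrences]

txns = [
    Transaction("A-17", 9_800), Transaction("A-17", 9_500),
    Transaction("A-17", 9_900), Transaction("B-02", 4_000),
]
print(flag_possible_structuring(txns))  # ['A-17']
```

The point of the sketch is the division of labor: software casts a wide net cheaply, and the human compliance officer decides which flagged items actually signal worrisome behavior.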

While ChatGPT is the most famous of the AI chatbots, other AI tools can also help cross time-consuming tasks off compliance professionals’ to-do lists.


Bing, for instance, can generate images or illustrations in a specified style, SlidesAI can create presentation slides within seconds, and Legal Robot has a “legal simplifier” that quickly translates legalese into plain language.


What AI Tools Can’t Do


When asked for an example of what ChatGPT should not do in bank compliance, here’s what the AI chatbot said:


“ChatGPT should not be used to generate false or misleading information to regulators or customers, or to provide advice or guidance that is not in line with regulatory requirements or ethical principles.”


This answer is technically accurate but is not the type of response a self-respecting human being would offer.


Here, the AI tool immediately assumed an extreme position and gave an answer that was obvious and yet also clueless. Of course, the tool should not be used to commit a crime, but what about some of the more nuanced requests that it might be poorly suited to fulfilling?


Given what’s known so far, here are some things to avoid when using ChatGPT:

  • Replacing compliance professionals. AI tools are prone to error, as many of the articles about the latest technology make clear. While these tools can assist compliance professionals, they are in no way ready to take the place of even the most junior individual on your compliance staff.

  • Working with information that’s private or sensitive. When signing into ChatGPT, users are asked not to share anything confidential with the model. There’s a great deal of speculation about how information put into ChatGPT is actually being used. Does it become part of the model? And if it does, in fact, get absorbed into a giant repository of possible responses, how might confidential business or customer information be shared with others and prompt major problems down the road?

In the end, it’s important to remember that these are very early days for ChatGPT and other AI tools. Experimenting with these tools is important because AI looks to be a growing part of the business landscape. On the other hand, compliance teams need to proceed with extreme caution until the advantages and disadvantages of AI chatbots are better understood.

 

Looking to take some of your compliance workload off your hands? Whether it’s reviewing alerts, SAR lookbacks, or just having an extra set of hands, our team of expert advisors is here to help.
