
Smart Tech, Smarter Compliance: Surviving AI/ML Hype in BSA/AML

  • Writer: Ethan Kratt
  • Apr 29

Let’s talk about the AI/ML buzz—and what’s real versus what’s just noise.


First things first: Machine Learning (ML) isn’t new. It didn’t come riding in on the back of ChatGPT or some flashy fintech marketing deck. 


If you’ve been in the BSA world long enough, chances are you’ve worked with some form of ML already—maybe without realizing it. Think: transaction monitoring systems with behavior-based detection or statistical modeling. That’s ML in action. 
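
To make that concrete, here is a minimal sketch (in Python, with invented field names and an illustrative threshold) of the kind of statistical, behavior-based check that has quietly powered transaction monitoring for years: flag activity that deviates sharply from a customer's own history.

```python
# A minimal sketch of the statistical ML that has long lived inside
# transaction monitoring: flag transactions that deviate sharply from a
# customer's own historical behavior. The threshold and the example
# figures are illustrative, not from any real system.
from statistics import mean, stdev

def flag_unusual(history: list[float], amount: float, z_threshold: float = 3.0) -> bool:
    """Return True if `amount` is a statistical outlier versus the customer's history."""
    if len(history) < 2:
        return False  # not enough behavior on file to model yet
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    z = (amount - mu) / sigma
    return abs(z) > z_threshold

# Example: a customer who usually moves ~$200 suddenly sends $5,000.
past = [180.0, 210.0, 195.0, 220.0, 205.0]
print(flag_unusual(past, 5000.0))  # True -> route to an analyst for review
```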


Artificial Intelligence (AI), on the other hand, is a broader concept—essentially, machines mimicking human intelligence. 


ML is a specific subset of AI. Outside tech circles, people often blur the two, using "AI" to describe any tool that learns from data or makes predictions. Conflating the terms muddies an already complex topic, and that muddiness is part of why many folks are hesitant. AI and ML can feel like a black box, and in BSA, black boxes aren't just risky; they're red flags.


Because here’s the truth: in BSA, it’s not enough for a system to detect suspicious activity—it has to explain why. Regulators don’t ask, “Did you file a SAR?” They ask, “Why did you file a SAR?” And we all know: if it’s not documented, it didn’t happen.


This is where the concept of glass box AI becomes critical. A glass box model is transparent, traceable, and understandable. It provides a clear audit trail from input to decision. A black box model? It might be technically accurate, but if you can’t explain the “why,” it’s useless in a regulatory exam—and possibly dangerous.
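
For illustration, here is a toy glass-box scorer in Python: a linear model whose alert decision decomposes into per-feature contributions, so the explanation is the audit trail. The features, weights, and threshold are all invented for this sketch.

```python
# A minimal sketch of what "glass box" means in practice: a linear scoring
# model whose output can be broken down into per-feature contributions,
# giving a clear trail from input to decision. Features, weights, and the
# threshold are invented for illustration only.
FEATURE_WEIGHTS = {
    "amount_zscore": 0.9,        # how unusual the amount is for this customer
    "new_counterparty": 1.4,     # first time sending to this recipient
    "high_risk_geography": 2.1,  # destination on the institution's risk list
}
THRESHOLD = 2.5  # alert when the total score exceeds this (illustrative)

def score_with_explanation(features: dict[str, float]) -> tuple[float, list[str]]:
    contributions = {name: FEATURE_WEIGHTS[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    # The explanation IS the audit trail: every input, weight, and product.
    trail = [f"{name}: value={features[name]:.2f} x weight={FEATURE_WEIGHTS[name]}"
             f" -> {contrib:.2f}" for name, contrib in contributions.items()]
    trail.append(f"total score {total:.2f} vs threshold {THRESHOLD}")
    return total, trail

score, why = score_with_explanation(
    {"amount_zscore": 2.8, "new_counterparty": 1.0, "high_risk_geography": 0.0})
print("ALERT" if score > THRESHOLD else "no alert")
for line in why:
    print(" ", line)
```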


So, what does responsible AI/ML usage in BSA look like?


  • Explainability is non-negotiable. You need to be able to demonstrate to your examiner (and your internal team) how the model arrived at its decision. Not just that it worked, but that it worked in a way you can reasonably explain and defend.


  • Data privacy matters. BSA data is confidential. That means if you’re using generative AI—especially third-party models—you need to ensure your data isn’t being used to train future versions of that model. If your SAR narrative ends up on the internet as a sample prompt, you’ve got bigger problems than just your next audit.


  • Use AI where it enhances human work, not where it replaces it. A great example is using AI to craft more effective SAR narratives. Writing clear, concise narratives is a skill. AI can help refine that, but it shouldn't be deciding whether to file or what to include without a human in the loop (a minimal workflow sketch follows this list).
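
Here is that sketch: the tool only proposes a draft, while the decision to file and the final wording stay with the analyst. Everything in it (the dataclass, the draft_narrative stand-in, the example facts) is hypothetical, not any vendor's actual API.

```python
# A minimal sketch of "human in the loop" for SAR narratives: the tool only
# proposes a draft; the filing decision and final wording stay with the
# analyst. `draft_narrative` is a hypothetical stand-in for whatever
# approved drafting tool an institution actually uses.
from dataclasses import dataclass

@dataclass
class SARFiling:
    file_sar: bool      # the human's call, never the model's
    narrative: str      # final text as approved or edited by the analyst
    reviewed_by: str    # sign-off recorded for the paper trail

def draft_narrative(case_facts: str) -> str:
    # Hypothetical AI assist: returns a draft, clearly labeled as such.
    return f"[DRAFT - ANALYST REVIEW REQUIRED] {case_facts}"

def review_and_file(case_facts: str, analyst: str,
                    approve: bool, edited_text: str = "") -> SARFiling:
    draft = draft_narrative(case_facts)
    # The analyst decides whether to file and can rewrite the draft entirely;
    # nothing goes out the door without explicit human approval.
    final = (edited_text or draft) if approve else ""
    return SARFiling(file_sar=approve, narrative=final, reviewed_by=analyst)

# Example: the analyst approves the filing but tightens the wording herself.
filing = review_and_file(
    "Nine cash deposits just under $10,000 over 12 days.",
    analyst="J. Smith",
    approve=True,
    edited_text="Nine cash deposits of $9,200-$9,800 between 04/01 and 04/12, "
                "consistent with structuring to avoid CTR thresholds.",
)
print(filing.file_sar, "-", filing.reviewed_by)
```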


Bottom line: AI and ML can be powerful tools, but they're just that: tools. Not miracle workers. Not replacements for experienced BSA professionals. And definitely not a free pass to outsource critical decision-making without accountability.


Before you start adding AI or ML tools into your workflow, keep a few hard rules in mind:


Check with IT and Compliance. Not every shiny tool is safe and approved. If it hasn't been officially signed off by IT and Compliance, do not use it, no matter how handy it looks.


Be smart about what you input. Treat customer information, internal reports, and transaction data like it could end up on the front page of the newspaper. If you don’t want an examiner or regulator reading it publicly, don’t share it with a third-party AI.
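
As a simple illustration of that mindset, here is a sketch that scrubs obvious identifiers before text leaves the building. Real redaction takes far more than a few regexes (names, addresses, free-text details all need handling), so treat the patterns below as illustrative only.

```python
# A minimal sketch of the "front page of the newspaper" rule: scrub obvious
# identifiers before any text goes near a third-party tool. The patterns
# below are illustrative only; production redaction needs much more.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),      # SSN-style numbers
    (re.compile(r"\b\d{8,17}\b"), "[ACCOUNT#]"),          # account-like numbers
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),  # email addresses
]

def scrub(text: str) -> str:
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(scrub("Customer john.doe@example.com, acct 123456789012, SSN 123-45-6789"))
# -> Customer [EMAIL], acct [ACCOUNT#], SSN [SSN]
```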


Understand your privacy obligations. Financial institutions operate under strict privacy laws, such as the Gramm-Leach-Bliley Act (GLBA). Just because an AI tool feels secure doesn't mean it checks your regulatory boxes. Know exactly where your data is going and who else might see it down the line.


Don’t blindly trust AI with "facts." Yes, AI can summarize regulations or help draft a SAR narrative, but it can also oversimplify or, worse, get it entirely wrong. Always double-check any regulatory interpretation or factual claim before relying on it in your BSA program.


A good rule of thumb? If you wouldn’t feel comfortable explaining it to your examiner with a straight face and a paper trail, it’s not ready for prime time in your BSA program.


The goal isn’t to make decisions faster—it’s to make them smarter, with proof to back it up every step of the way.

Ready to make your BSA program more efficient and effective? We're here to help. Sign up for a demo to learn how you can take the B***S*** out of BSA.


Meet the Author:

Kristin Parker, VP of Compliance & Operations

Kristin is a seasoned expert in BSA/AML and fraud prevention, with extensive experience building and implementing monitoring programs for higher-risk industries. She has contributed to significant updates in the FFIEC manual, provided practical, risk-based AML guidance at industry events, and played a key role in the development of RiskScout’s innovative, actionable BSA solutions. Recognized by her peers as PBC's Compliance Person of the Year, Kristin continues to champion efficient, technology-driven approaches that empower financial institutions to confidently manage regulatory requirements.
