This TechRadar article identifies many of the challenges associated with detecting money laundering activity; the problem is far more complex than most non-practitioners realize.
The article makes it clear, however, that criminals have the advantage: each financial institution (FI) must attempt to detect money laundering without a network-wide view of transactions and without investigators reporting back the outcomes of alerts, feedback that could be used to further refine the AI models.
This appears to be an ideal problem for Distributed AI. Distributed AI enables every FI to run a local copy of a model that was developed and deployed centrally, in this case by the government. Each FI updates its copy using local data, and at regular intervals the updated model, not the data, is sent back to retrain the central model. The central model thus improves without any FI needing to divulge local data.
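The cycle just described resembles federated learning. A minimal sketch of federated averaging, with hypothetical FI names and a simple linear risk-scoring model standing in for a real AML model (all data and names below are illustrative assumptions, not from the article):

```python
# Federated-averaging sketch: each FI trains on its private data and
# shares only model weights; the central party averages the weights.

def local_update(weights, transactions, labels, lr=0.01):
    """One pass of gradient descent on an FI's private data."""
    w = list(weights)
    for x, y in zip(transactions, labels):
        pred = sum(wi * xi for wi, xi in zip(w, x))
        err = pred - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

def federated_average(local_models):
    """Central regulator averages the FIs' weight vectors."""
    n = len(local_models)
    return [sum(ws) / n for ws in zip(*local_models)]

# Central model deployed to three hypothetical FIs.
central = [0.0, 0.0]

# Each FI's transaction data never leaves the institution.
fi_data = {
    "fi_a": ([[1.0, 0.2], [0.3, 0.9]], [1.0, 0.0]),
    "fi_b": ([[0.8, 0.1], [0.2, 0.7]], [1.0, 0.0]),
    "fi_c": ([[0.9, 0.4], [0.1, 0.8]], [1.0, 0.0]),
}

for _ in range(10):  # periodic retraining intervals
    updates = [local_update(central, x, y) for x, y in fi_data.values()]
    central = federated_average(updates)  # only weights are shared
```

After a few rounds the averaged central model reflects patterns learned at every FI, which is the property the article's argument depends on.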
The problem preventing an effective solution today begins with the lack of feedback to update the AI models:
“In most countries, the regulatory requirements make it difficult to track the success of anti-money laundering (AML) projects, however. Banks are tasked with identifying and investigating potentially fraudulent activity, and disclosing it to the authorities as appropriate. However, there are only two countries worldwide where the authorities will come back and tell the bank what happened – whether they were right. That being the case, how can banks push for greater accuracy in their AML projects when they don’t see the results?”
There is also the need for FIs to prove that the AI model in use is appropriate. That problem is solved if the regulators control the model:
“It’s also essential that banks can demonstrate to regulators why transactions are flagged up in the way they are. How does their segmentation work? Do they use a predictive model, and if so, how do they tune their detection? Institutions have to be able to prove that their decisions are not influenced by unconscious (or conscious) bias. Having a bespoke algorithm in place will give banks the tools they need to clearly lay out why certain actions have been flagged as suspicious.”
Not addressed directly in the article is the limitation imposed by having only local data. As with card fraud, an AI model trained on data collected from both sides of the network (merchant and FI), and from as many merchants as possible, can be far more accurate.
This approach can immunize all of the endpoints shortly after the criminal activity has been properly identified and characterized.
Now consider the scale of the problem:
“It’s estimated that the global money laundering business is worth somewhere in the region of $2000bn, of which only around 0.2% is detected.”
It seems clear that there is a strong case for regulators to manage money laundering monitoring centrally while deploying the AI models to execute directly in each FI's environment. This would enable the model to use local data that is never shared, yet still flag suspicious local activity.
That local model is then shared centrally, where it can be pruned to ignore false positives and enhanced to detect actual criminal activity, before being sent back to the FI for local execution.
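The central pruning step might look like the following sketch, in which investigator outcomes (an assumption; the article notes that almost no jurisdiction provides this feedback today) are used to tune the alert threshold before the model is redeployed to the FIs:

```python
# Hypothetical feedback loop: investigators label each flagged alert as a
# true hit or a false positive, and the regulator raises the alert
# threshold until a precision target is met. Data is illustrative only.

def tune_threshold(alerts, target_precision=0.5):
    """Find the lowest threshold whose flagged alerts meet the target.

    alerts: list of (risk_score, confirmed_laundering) pairs.
    """
    candidates = sorted({score for score, _ in alerts})
    for threshold in candidates:
        flagged = [(s, c) for s, c in alerts if s >= threshold]
        if not flagged:
            break
        precision = sum(c for _, c in flagged) / len(flagged)
        if precision >= target_precision:
            return threshold
    return candidates[-1] if candidates else 0.0

# Investigator feedback: (model risk score, was it real laundering?)
feedback = [(0.9, True), (0.8, True), (0.6, False),
            (0.5, False), (0.4, False), (0.3, False)]

new_threshold = tune_threshold(feedback)  # redeployed to every FI
```

This is only a threshold adjustment; in practice the central model itself would be retrained on the labeled outcomes, but the principle is the same: investigator feedback flows to the center, and only the improved model flows back out.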
Overview by Tim Sloane, VP, Payments Innovation at Mercator Advisory Group