Using artificial intelligence to fight financial crime – a legal risk perspective
Source: FCA


The Head of the Financial Crime Department at the UK Financial Conduct Authority (the FCA), Rob Gruppetta, gave a speech on "Using artificial intelligence to keep criminal funds out of the financial system" in December 2017.[1] In it, he explored how artificial intelligence (AI) could potentially be used to prevent financial crime, and for anti-money laundering (AML) purposes in particular. Although there were sure to be challenges, he concluded, AI had the "capability to greatly amplify the effectiveness of the machine's human counterparts" in this area. This article highlights four potential risks that a firm's legal advisers may want to consider before AI is incorporated into their institution's financial crime or AML processes.

Financial crime and AML innovation

It is estimated that British banks spend GBP 5 billion each year fighting financial crime.[2] Much of this is spent on trying to prevent money laundering (not always, history has taught us, successfully: the UK National Crime Agency estimates that hundreds of billions of pounds of criminal money is still laundered through UK banks each year).[3] In such circumstances, it comes as no surprise that firms and authorities alike are interested in market innovation that can strengthen AML processes and reduce the cost of compliance.

Take transaction monitoring, for example, which was the main focus of Mr Gruppetta's speech. In this context, AI may help firms reduce the number of 'false positives': alerts that must be investigated but turn out to require no substantive action, and long-standing enemies of AML processes. AI machine learning techniques, even at their current stage of development, may be capable of reducing these costly detours into irrelevance by around 20-30 per cent.[4] It appears that at least one financial institution has already implemented such a system.[5] In light of this, we may expect to see other firms putting in place similar AI systems.
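For illustration only, the sketch below shows one way such alert triage might work in principle: a supervised model trained on historical alert outcomes scores new alerts so that analysts can prioritise the highest-risk ones. The file name, column names and threshold are hypothetical, and this is not a description of any particular vendor's system.

```python
# Illustrative sketch only: triaging transaction-monitoring alerts with a
# supervised model trained on historical alert outcomes. Column names,
# thresholds and data are hypothetical, not a production AML design.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical historical alerts: engineered features plus an analyst label
# (1 = alert led to a suspicious activity report, 0 = false positive).
alerts = pd.read_csv("historical_alerts.csv")
features = ["amount", "txn_count_30d", "cross_border_ratio", "account_age_days"]
X_train, X_test, y_train, y_test = train_test_split(
    alerts[features], alerts["confirmed_suspicious"], test_size=0.2, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

# Score new alerts and route only those above a risk threshold for full
# investigation; the rest receive a lighter-touch review rather than closure,
# since discarding alerts outright would itself be a regulatory risk.
risk_scores = model.predict_proba(X_test)[:, 1]
for_investigation = X_test[risk_scores >= 0.3]
print(f"{len(for_investigation)} of {len(X_test)} alerts routed for full review")
```

In practice, any such triage would be expected to sit alongside, rather than replace, existing rule-based controls and analyst review.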

Implementing AI technology

According to Mr Gruppetta, the FCA currently expects to see this AI "technology implemented in a way you would any other – [with] testing, governance and proper management". Clearly, such processes are necessary – but they may not always be sufficient. AI can employ techniques, such as unsupervised machine learning algorithms, that are potentially new to the financial crime space. Firms intending to use such AI may need to go beyond the usual good practices involved in putting an IT system in place.

In this article we highlight four specific risks of using AI in the context of fighting financial crime and preventing money laundering. This is not a comprehensive list, nor is it meant to be one. It is, instead, intended as a starting point for thinking about how firms in the financial sector may want to approach an issue that is likely to give rise to a challenging and complex set of considerations.

AI 'interpretability'

AI decisions may be taken in a way which cannot be easily understood by those who use the software. Mr Gruppetta described this as a potential lack of 'interpretability' of AI.

In a financial crime and AML context, such a potential lack of interpretability may pose a problem. Firms typically need to be able to explain why and how a particular decision was made (for example, what was the basis for a suspicion of money laundering, or lack thereof). This is important for a firm's internal systems and controls and could also be necessary in the context of regulatory enforcement action.

There are, it should be noted, circumstances in which AI decisions or actions may be taken in a way which can in fact be easily understood by those who use the software.[6] However, this interpretability may make a system less effective if it necessitates a reduction in the complexity of the technology. Mr Gruppetta made this point for the FCA when he said that "a firm may need to carefully explore the trade-off between interpretability and performance when choosing what [AI] systems to use".
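As a purely illustrative sketch of that trade-off, the snippet below compares a simple linear model, whose coefficients can be read and documented, against a more complex ensemble on synthetic data. The feature names are hypothetical and the comparison says nothing about any real system.

```python
# Illustrative sketch of the interpretability/performance trade-off:
# a logistic regression whose coefficients can be read and explained,
# versus a more complex ensemble that may score better but is harder to
# interpret. Data and feature names are hypothetical.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=6, random_state=0)
feature_names = ["amount", "velocity", "geo_risk", "counterparty_risk",
                 "account_age", "channel_risk"]

interpretable = LogisticRegression(max_iter=1000)
complex_model = RandomForestClassifier(n_estimators=200, random_state=0)

for name, model in [("logistic regression", interpretable),
                    ("random forest", complex_model)]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean AUC = {auc:.3f}")

# The linear model's coefficients give a direct, documentable account of
# why an alert was scored as it was, which is harder to produce for the
# ensemble without additional explanation tooling.
interpretable.fit(X, y)
for fname, coef in zip(feature_names, interpretable.coef_[0]):
    print(f"{fname}: {coef:+.2f}")
```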

The FCA has not, however, provided any guidance as to where the line ought to be drawn. Instead, it has been left to firms themselves to consider on a case-by-case basis. This presents a challenge of fundamental importance, and a regulatory risk, in a financial crime and AML context: get it wrong, and the new system may end up causing more problems than it solves.

AI biases

AI systems have the potential to reinforce pre-existing human biases. A machine has no predetermined concepts about right and wrong, only those which are programmed into it. A system that can learn for itself may act in a way unforeseen by its creators, and contrary to their original intentions.[7]

There are, for instance, examples of AI systems in other contexts showing fewer high-level job openings to female applicants or recommending harsher sentences for ethnic minorities.[8] It is not hard to imagine an analogous situation occurring in a financial crime or AML context. For instance, a transaction monitoring system based on an unsupervised machine learning algorithm could potentially make decisions prejudicial to certain names, places or even genders. The repercussions of this could be significant for a firm, both from a financial and reputational point of view.

However, there does not yet appear to be a clear way to limit this risk without detracting from the performance of a system. Pre-programmed rules typically limit the intelligence of a machine, but an overly flexible learning-based approach may pave the way for potential lawsuits or PR headaches. This is, therefore, another area in which firms and their advisers should consider the options available to them and seek to strike the right balance before incorporating AI into their financial crime or AML processes.
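By way of illustration only, one crude control a firm might consider is routinely comparing a system's flag rates across groups defined by a protected or proxy characteristic. The sketch below assumes a hypothetical file of scored customers and an arbitrary 80 per cent threshold; a real fairness review would be far more involved.

```python
# Illustrative sketch of a simple bias check: comparing the rate at which a
# monitoring model flags customers across groups defined by a protected or
# proxy characteristic. Column names and the 80% threshold are hypothetical.
import pandas as pd

# Hypothetical scored output: one row per customer with the model's decision
# and an attribute that should not drive outcomes (e.g. nationality group).
scored = pd.read_csv("scored_customers.csv")   # columns: group, flagged (0/1)

flag_rates = scored.groupby("group")["flagged"].mean()
print(flag_rates)

# A crude disparity test: escalate for human review if the lowest group
# flag rate falls below 80% of the highest, i.e. outcomes differ markedly
# between groups for reasons the firm would need to be able to justify.
ratio = flag_rates.min() / flag_rates.max()
if ratio < 0.8:
    print(f"Potential disparity detected (ratio = {ratio:.2f}); escalate for review")
```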

AI third-party software providers

In a financial crime and AML context, firms may need to rely primarily on software provided by third parties where they do not have the expertise necessary to develop their own AI programs in-house. This may create a third-party dependency which needs to be identified and managed from the outset of the contractual process. New contractual clauses may be needed to manage risks specific to AI systems, although the legal principles involved, at least for now, will be the traditional ones with which we are familiar (contract, property, tort and so on). For example, the ownership and/or control of new code generated by the AI itself while using a customer's data will need to be dealt with under the contract.

Although firms will have at their disposal the legal tools to manage their own relationships with third-party software providers effectively, their collective actions may nevertheless give rise to systemic risk. This is a point that has been raised by the Financial Stability Board (FSB), an international body that monitors and makes recommendations about the global financial system. The FSB's report on 'Artificial intelligence and machine learning in financial services'[9] noted that "banks' vulnerability to systemic shocks may grow if they increasingly depend on similar algorithms". It follows that if systemically important financial institutions come to rely on similar AI software and services from third-party providers, a systemic shock, such as a widely used provider failing or a virus affecting a widely used AI program, could cause widespread failures of financial crime and AML systems and controls across the financial sector. This is unlikely to escape the notice of authorities with responsibility for regulating systemic risk.

AI and data protection

Using AI in a financial crime and AML context will typically involve the retention and automated processing of vast amounts of personal data, some of which may be sensitive. This will need to be done in accordance with data protection laws.

In particular, the EU General Data Protection Regulation (GDPR), in force from 25 May 2018 across all Member States, contains provisions dealing specifically with automated decision making. Article 22 GDPR has specific rules to protect an individual where an entity is carrying out fully automated decision making that has a legal or similarly significant effect on that individual. Certain potential automated AML decisions, such as the freezing of assets, may fall within this category and so would be caught by Article 22.

It is worth noting that it is not clear whether Article 22 prohibits fully automated decision making, subject to certain exceptions, or whether it merely gives an individual the right to object to such decision making, again subject to certain exceptions. The guidance of the Article 29 Working Party (the group of EU data protection authorities charged with agreeing Europe-wide guidance on the GDPR) currently supports the former interpretation, although that guidance was subject to consultation at the time of writing, with the final version expected shortly.

The exceptions in Article 22 are:

        where the automated decision making is necessary for entering into, or the performance of, a contract;
        where the automated decision making is expressly authorised by EU or Member State law; or
        where the data subject has given explicit consent.

The exceptions are more limited where certain special categories of data are processed (e.g. race, ethnicity, religion, health). If any data in these special categories is used by AI in the financial crime or AML context, the only exceptions are explicit consent or where the processing is "necessary for reasons of substantial public interest" and is "proportionate to the aim pursued".

A careful analysis of which exception(s) apply will be required, typically involving a data protection impact assessment (DPIA), as these exceptions are likely to be interpreted very narrowly.

Even where an exception applies, the GDPR requires certain safeguards including:

        transparency – a data controller needs to be able to describe to a data subject the existence of any automated decision making, meaningful information about the logic involved, and the significance and envisaged consequences of the processing for the data subject;[10] and
        a right to obtain human intervention and challenge a decision – this is a further layer of protection and the controller must provide a simple way to exercise these rights.

Firms should consider their approach to algorithmic ‘interpretability’ (above) with this in mind.
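As a purely hypothetical sketch of what "meaningful information about the logic involved" might look like for a simple linear scoring model, the snippet below reports the factors that contributed most to an individual decision. The feature names and model are invented for illustration and do not reflect any particular firm's system.

```python
# Illustrative sketch only: generating a plain-language summary of the main
# factors behind an automated decision, as one way of approaching the GDPR
# transparency safeguard. Feature names and the model are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["transaction amount", "30-day transaction count",
                 "cross-border ratio", "account age (days)"]

# Assume a simple, already-trained linear scoring model over synthetic data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 4)), rng.integers(0, 2, 1000)
model = LogisticRegression().fit(X, y)

def explain(decision_input):
    """Return the factors that contributed most to this individual score."""
    contributions = model.coef_[0] * decision_input
    order = np.argsort(-np.abs(contributions))
    return [(feature_names[i], float(contributions[i])) for i in order[:3]]

for factor, weight in explain(X[0]):
    print(f"{factor}: contribution {weight:+.2f}")
```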

The GDPR concept of ‘privacy by design’ will also require the design of any new technologies, including new AI systems to be used in a financial crime and AML context, to factor in privacy considerations at the outset. For more information on the GDPR please see a guide by Allen & Overy data protection experts.

A global perspective

Financial crime and money laundering typically have an international dimension. An AI system designed to prevent them needs to be global in scope. This, of course, will give rise to cross-jurisdictional challenges. For example, as noted above, AI systems are typically intensive users of data. Although the GDPR will provide a level of harmonisation within EU Member States, it is not yet clear how legal questions of data protection will be addressed at a transnational level to enable firms to create AI systems that are fit for purpose in a global economy.

It is also, perhaps, worth noting that this article has been written from the perspective of a UK lawyer because the FCA has made tackling financial crime and money laundering a priority, and has been active in promoting innovation in this space.[11] But the concepts it explores are global in nature. And it will not be long, we expect, before the agenda set by Mr Gruppetta and the FCA becomes a more widespread topic of focus.

Conclusion

It is clear that AI technology needs to be approached with care. As with any innovation, there are potential risks as well as benefits. We have considered here how AI software may not always make decisions that are interpretable by humans, how it has the potential to reinforce biases and act contrary to the intentions of its programmers, how individual firms' relationships with third-party providers could give rise to systemic risks, and how any use of AI to tackle financial crime will need to comply with data protection laws. These risks may be just the tip of the iceberg.

Having said that, it seems likely that disruptive technological innovation will affect financial crime and AML processes in the near future. AI may, and most probably will, be used to overhaul existing systems, both to better prevent money laundering and to reduce the cost of compliance for firms. So long as the risks are identified early and subsequently managed carefully, AI has, as the FCA's Rob Gruppetta pointed out, the potential to "keep criminal funds out of the financial system". This, we are sure everyone reading this will agree, can only be a good thing.

[1] FCA, https://www.fca.org.uk/node/46631.

[2] FCA, https://www.fca.org.uk/node/46631.

[3] National Crime Agency, http://www.nationalcrimeagency.gov.uk/crime-threats/money-laundering.

[4] McKinsey, https://www.mckinsey.com/business-functions/risk/our-insights/the-new-frontier-in-anti-money-laundering.

[5] Ayasdi, https://www.ayasdi.com/solutions/anti-money-laundering/.

[6] Intel, https://newsroom.intel.com/news-releases/intel-launches-intel-saffron-aml-advisor-using-ai-detect-financial-crime/.

[7] At the World Economic Forum on 25 January 2018, the UK Prime Minister, Theresa May, announced that exploring the ethical implications of AI should be a priority for industry and government.

[8] FSB, Artificial intelligence and machine learning in financial services, Annex B: http://www.fsb.org/2017/11/artificial-intelligence-and-machine-learning-in-financial-service/.

[9] This report was also referenced by Mr Gruppetta in his speech.

[10] The Article 29 Working Party Guidance states "the controller should find simple ways to tell the data subject about the rationale behind, or the criteria relied on, in reaching the decision without necessarily always attempting a complex explanation of the algorithms used or the full algorithm. The information provided should however be meaningful to the data subject".

[11] FCA, https://www.fca.org.uk/publication/business-plans/business-plan-2017-18.pdf.

