What does the future regulation of AI mean for banks?

Artificial intelligence (AI) is no longer something banks can ignore. From anti-money laundering solutions to chatbots and creditworthiness assessment tools, AI, and machine learning in particular, is becoming an essential component of data analytics innovation.

Increasingly, AI use is becoming embedded in many of the key functions of financial services providers, causing regulators across the globe to take notice and move towards a more standardised approach to managing AI risk.

The European Commission has been at the forefront of this and recently published a draft regulation intended to govern the use of AI across the EU. However, this is not only relevant to the EU: it is likely to have implications for the approach taken in other jurisdictions.

Similar to the way the EU’s General Data Protection Regulation shaped and influenced laws in many other places, the EU’s AI regulation is also expected to lead to significant change. This includes the UK, where the Treasury and the Financial Conduct Authority (FCA) will need to decide to what extent they will replicate the EU’s approach or diverge from it.

While the draft legislation is not intended to be sector-specific, it makes express reference to the use of AI in financial services. AI systems used to evaluate creditworthiness and establish credit scores are listed as ‘high-risk AI’.

Regulatory scope

Systems that are not exclusive to financial services but are relevant to the day-to-day conduct of business also fall within the scope of the regulation and the high-risk classification. These include systems used for recruitment purposes, including advertising vacancies and screening applications, and those used to make decisions on the promotion or termination of employment. Even systems that enable automatic task allocation, and the monitoring or evaluation of the performance and behaviour of employees, may need to be classified as high-risk.

Other AI systems may also be designated as high-risk in the future. The draft regulation provides a process through which the European Commission can issue delegated legislation deeming other systems high-risk on the basis of broad criteria. Systems used by banks that are not currently referenced in the draft regulation could therefore be brought within its scope in the future.

The draft regulation sets out prescriptive requirements that will need to be complied with before high-risk AI is used. These include specific controls around risk management, data quality and accuracy, and the need for technical documentation to enable traceability of decisions. Technical documentation should provide evidence that automatic logging of events is possible while AI systems are in use.
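
To illustrate, the sketch below shows one way such automatic event logging might look in practice. It is a minimal Python illustration, not text from the regulation; the model call, field names and version identifier are all hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)

MODEL_VERSION = "credit-scorer-1.4.2"  # hypothetical version identifier


def score_applicant(features: dict) -> float:
    """Stand-in for the real credit-scoring model."""
    return 0.72


def score_with_audit_trail(applicant_id: str, features: dict) -> float:
    """Score an applicant and automatically log the decision event."""
    score = score_applicant(features)
    # Each decision is recorded with its inputs, output, model version
    # and timestamp, so individual outcomes can be traced later.
    logging.info(json.dumps({
        "event": "credit_score_issued",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": MODEL_VERSION,
        "applicant_id": applicant_id,
        "inputs": features,
        "output": score,
    }))
    return score


score_with_audit_trail("A-1001", {"income": 42000, "missed_payments": 0})
```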

High-risk systems must comply with human oversight requirements aimed at preventing or minimising risks to fundamental rights. Human oversight processes should be sufficient to enable the capacities and limitations of AI systems to be fully understood. 

These processes should also ensure that the system can be disabled where necessary. The draft regulation requires the human providing oversight to be able to intervene in the operation of the high-risk AI system, or interrupt it through a ‘stop’ button or similar procedure.
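
As a rough illustration of how a ‘stop’ control and human referral might be wired into a system, the following sketch assumes a hypothetical model object with a predict method; none of it is prescribed by the draft regulation.

```python
import threading


class SupervisedScorer:
    """Wraps a high-risk model with a human-operated stop control."""

    def __init__(self, model, review_threshold: float = 0.6):
        self.model = model                  # any object with a .predict(features) method
        self.review_threshold = review_threshold
        self._halted = threading.Event()    # the 'stop button'

    def stop(self) -> None:
        """Called by the human overseer to interrupt operation."""
        self._halted.set()

    def score(self, features: dict) -> dict:
        if self._halted.is_set():
            raise RuntimeError("System halted by human overseer")
        score = self.model.predict(features)
        # Uncertain outputs are referred to a person rather than
        # acted on automatically, keeping a human in the loop.
        if score < self.review_threshold:
            return {"score": score, "decision": "refer_to_human"}
        return {"score": score, "decision": "automated"}
```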

Unintended bias risks

Banks must also ensure that AI systems are transparent: they will need to be designed in a way that enables users to interpret their outputs. Banks should also keep in check the tendency of users to rely automatically, or over-rely, on the output produced by a high-risk AI system.
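
One simple way to make an output interpretable is to pair the score with the factors that drove it. The sketch below assumes a linear model with made-up weights and feature names; it is an illustration, not a method taken from the regulation.

```python
def score_with_reasons(features: dict) -> dict:
    """Return a score together with simple, human-readable reason codes."""
    weights = {"income": 0.5, "missed_payments": -0.9, "tenure_years": 0.2}
    contributions = {k: weights.get(k, 0.0) * v for k, v in features.items()}
    return {
        "score": sum(contributions.values()),
        # The largest contributions, positive or negative, double as
        # reason codes a user can inspect rather than accept blindly.
        "reasons": sorted(contributions.items(), key=lambda kv: -abs(kv[1])),
    }


# Prints the score alongside ranked contributions as reason codes.
print(score_with_reasons({"income": 1.2, "missed_payments": 0.5, "tenure_years": 3.0}))
```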

The draft regulation also addresses unintended bias and unfair discriminatory outcomes. It requires training, validation and testing data to be subject to appropriate data governance and management practices that take potential bias into account.
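
What such a data-governance check might involve is sketched below: a simple comparison of approval rates across a protected attribute using the ‘four-fifths’ rule of thumb. The rule, field names and threshold are illustrative assumptions, not requirements drawn from the regulation.

```python
from collections import defaultdict


def approval_rates(records, group_key="group", label_key="approved"):
    """Approval rate per group in a labelled dataset."""
    totals, approved = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        approved[r[group_key]] += r[label_key]
    return {g: approved[g] / totals[g] for g in totals}


def disparate_impact_check(records, threshold=0.8):
    """Flag the dataset if the lowest group approval rate falls below
    80% of the highest (the 'four-fifths' rule of thumb)."""
    rates = approval_rates(records)
    ratio = min(rates.values()) / max(rates.values())
    return {"rates": rates, "ratio": ratio, "flagged": ratio < threshold}


sample = (
    [{"group": "A", "approved": 1}] * 80 + [{"group": "A", "approved": 0}] * 20
    + [{"group": "B", "approved": 1}] * 50 + [{"group": "B", "approved": 0}] * 50
)
print(disparate_impact_check(sample))  # ratio 0.625 -> flagged
```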

Consideration must also be given to how AI systems learn. Processes may need to be put in place to ensure that outputs are not used as inputs for future operations in ways that could lead to discrimination or other poor outcomes.
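
One way such a process might look is a provenance filter on retraining data, sketched below. The `outcome_source` tag is a hypothetical convention for marking whether an outcome was observed independently or generated by the model’s own earlier decisions.

```python
def build_training_set(history):
    """Keep only records whose outcome was independently observed,
    excluding outcomes produced by the model's own earlier decisions."""
    return [r for r in history if r.get("outcome_source") != "model"]


history = [
    {"features": {"income": 42000}, "outcome": 1, "outcome_source": "repayment_record"},
    {"features": {"income": 18000}, "outcome": 0, "outcome_source": "model"},  # excluded
]
print(build_training_set(history))
```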

Some AI practices are prohibited altogether. Most of these will not be relevant to banks, as they are more focussed on AI use by public authorities. However, there are some which are worded vaguely and could potentially catch certain promotions or advertising practices. Banks will want to engage with the legislative process to ensure that these uncertainties are addressed before the law comes into force.

As the use of AI technology deepens across the sector, we are likely to see more guidance and regulation proposed to help mitigate the risks this draft regulation sets out to address. As these proposals will affect both current business processes and future innovations, banks must prepare now to engage with the issues, and watch closely the steps the FCA and other regulatory bodies take to address AI risk.

Luke Scanlon is head of fintech propositions at law firm Pinsent Masons.
