AML Conference Insights: Key Principles on the Use of AI for Banks


INSIGHT
Published: June 23, 2023

Cloud computing and the growing availability of structured and unstructured data have given banks of all sizes access to far greater computing power and data, and with them a wide range of AI tools and services. These tools have proven valuable in strengthening fraud prevention controls, improving AML/CFT monitoring, and identifying potential fair lending violations, and they have the potential to enhance overall risk management, compliance monitoring, and internal controls.

 

However, it is crucial for banks to ensure that effective governance processes and controls are in place throughout the planning, implementation, and operation of these solutions. This oversight is essential to realize the benefits of AI while mitigating unintended risks.

 

The focus on explainability

The examination of AI applications focuses on several key principles, perhaps the most crucial being explainability. Explainability ensures that the system’s workings can be understood and challenged. Banks must assess the transparency and interpretability of the AI systems under examination.

 

Emphasizing explainability ensures that the decision-making process is clear and comprehensible, which builds trust with regulators and stakeholders, helps meet regulatory expectations, and allows outcomes to be understood and challenged where necessary.
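To illustrate what per-decision explainability can look like in practice, here is a minimal sketch, not a method prescribed by examiners, that pairs a simple alert-scoring model with per-alert feature contributions a reviewer can restate in plain English and challenge. The features, data, and model choice are hypothetical assumptions for illustration only.

```python
# Minimal sketch: an interpretable alert-scoring model whose per-alert reasoning
# can be restated in plain English. Features, data, and labels are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical alert features: amount, transactions in 24h, share cross-border, account age (days)
feature_names = ["amount", "txn_count_24h", "pct_cross_border", "account_age_days"]
X = rng.normal(loc=[5000, 10, 0.3, 400], scale=[2000, 4, 0.15, 150], size=(500, 4))
y = (X[:, 0] > 6500) & (X[:, 2] > 0.35)  # toy "suspicious" label for illustration

scaler = StandardScaler().fit(X)
model = LogisticRegression().fit(scaler.transform(X), y)

def explain_alert(x):
    """Rank each feature's contribution to the alert score (coefficient x scaled value)."""
    contributions = model.coef_[0] * scaler.transform([x])[0]
    return sorted(zip(feature_names, contributions), key=lambda kv: -abs(kv[1]))

# A reviewer-readable account of why one alert scored the way it did
for name, contribution in explain_alert(X[0]):
    print(f"{name}: {contribution:+.2f}")
```

The same idea extends to more complex models through dedicated explainability tooling; what matters to examiners is the output, a human-readable account of why a given alert was scored the way it was.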

 

Here’s a closer look at the top five insights from compliance examiners, which shed light on the importance of AI explainability, the need for a plain-English executive summary, and the value of proactive communication with examiners.

 

  • AI explainability: A lack of explainability has contributed to the limited number of AI applications currently in use. Banks must be able to explain transparently how their AI models work and address any concerns that explanation raises.
  • A plain-English executive summary: Simplicity is key. Banks must provide a plain-English executive summary of their AI models that does not rely on theoretical computations but instead presents complex concepts in the simplest terms possible, so that stakeholders at all levels, including the board, understand the fundamental workings of the systems.
  • Utilizing executive summaries at the board level: The executive summary should serve as a concise yet comprehensive overview of the AI model’s functionality, addressing key components and potential risks, so that decision-makers have a clear understanding of the AI initiatives employed within the institution.
  • High-level summary: Banks must be able to deliver a high-level summary of their AI models on request. Failure to do so raises concerns about the bank’s ability to withstand critical challenges to the system, and examiners treat it as a red flag indicating potential vulnerabilities or inadequate risk management.
  • Proactive communication: In the spirit of transparency and collaboration, banks are encouraged to engage with examiners before formal examinations begin. Providing an overview of AI initiatives and addressing examiners’ questions or concerns up front establishes a foundation of trust and demonstrates a commitment to compliance and effective risk management.

 

Other key principles discussed by compliance examiners include:

 

  • Data management: A comprehensive understanding of data management practices is crucial when evaluating AI applications, including how data is sourced, processed, and maintained throughout the AI lifecycle. Effective data governance, covering data quality, privacy, and consent, is vital to comply with regulatory obligations, protect customer information, and preserve data integrity (a minimal sketch of basic data-quality checks follows this list).
  • Privacy and security: Because AI systems often handle sensitive data, privacy and security must be prioritized in the evaluation. That means assessing the safeguards for data integrity and confidentiality, including encryption, access controls, and compliance with privacy regulations, to ensure the AI systems align with regulatory expectations.
  • Risk management: Assessing and mitigating the risks of AI deployment is a key responsibility. A robust risk management framework, with governance structures, policies, procedures, and ongoing monitoring, allows an institution to identify risks, weigh their potential impact on compliance and customer protection, and address them proactively.
  • Compliance monitoring: A strong compliance monitoring program is essential to ensure ongoing adherence to regulatory requirements. Regular audits and assessments identify deviations from compliance standards and create the opportunity for corrective action before issues escalate.
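To make the data governance point concrete, here is a minimal, hypothetical sketch of automated data-quality checks a bank might run on a transaction feed before it reaches an AML model. The field names, the consent flag, and the checks themselves are illustrative assumptions rather than regulatory requirements.

```python
# Hypothetical data-quality checks on a transaction feed used by an AML model.
# Field names, the consent flag, and the checks are illustrative assumptions.
import pandas as pd

def run_data_quality_checks(df: pd.DataFrame) -> dict:
    """Return simple quality metrics that could be logged and reviewed as part of data governance."""
    return {
        # Completeness: share of missing values per critical field
        "missing_rate": df[["customer_id", "amount", "country"]].isna().mean().to_dict(),
        # Uniqueness: duplicate transaction identifiers
        "duplicate_rows": int(df.duplicated(subset="txn_id").sum()),
        # Validity: negative or zero amounts that should not reach the model
        "invalid_amounts": int((df["amount"] <= 0).sum()),
        # Consent: records lacking a recorded basis for processing
        "missing_consent": int((~df["consent_recorded"]).sum()),
    }

# Example with a tiny illustrative frame
sample = pd.DataFrame({
    "txn_id": [1, 2, 2, 3],
    "customer_id": ["A", "B", "B", None],
    "amount": [120.0, -5.0, -5.0, 300.0],
    "country": ["GB", "US", "US", "FR"],
    "consent_recorded": [True, True, True, False],
})
print(run_data_quality_checks(sample))
```

Checks like these would typically be logged over time so that data-quality trends can be evidenced during an examination.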

 

Additional considerations

If third-party vendors are involved, additional considerations regarding data control and security are necessary. Banks must engage in proper risk management and ongoing oversight when using third-party solutions or collaborating with vendors for AI applications to ensure compliance, consumer protection, and privacy.

 

Governance plays a crucial role in managing risks associated with AI. Banks must demonstrate proper documentation, testing protocols, model management, and vendor management. Ongoing audits are necessary to ensure compliance and effectiveness. Proactive communication with examiners before formal examinations is encouraged to provide an overview of AI initiatives and address any questions or concerns.

 

Risk analysis is a key area of focus for AI applications, as these tools can sharpen how risks are assessed and analyzed. However, the lack of explainability in some AI applications continues to pose challenges and, as noted above, has limited the number of applications in use.

 

AI is also being used in audit processes, particularly natural language processing for analyzing large data sets. It can flag narratives that lack certain required elements, enabling targeted sampling, and it assists with data visualization and assessment by surfacing anomalies or patterns that require further investigation.
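As a hedged sketch of these audit uses, not a description of any specific vendor tool, the snippet below flags case narratives that are missing expected elements and applies an isolation forest to surface anomalous records for targeted sampling. The keyword lists, features, and injected outliers are illustrative assumptions.

```python
# Illustrative sketch of two audit aids: flagging narratives that lack expected
# elements, and surfacing anomalous records for targeted sampling.
# Keyword lists, features, and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

REQUIRED_ELEMENTS = {  # elements an auditor might expect in a case narrative
    "who": ["customer", "subject", "account holder"],
    "what": ["wire", "cash", "transfer", "deposit"],
    "why": ["unusual", "inconsistent", "no apparent purpose"],
}

def missing_elements(narrative: str) -> list[str]:
    """Return the expected narrative elements that no keyword matched."""
    text = narrative.lower()
    return [name for name, keywords in REQUIRED_ELEMENTS.items()
            if not any(k in text for k in keywords)]

narratives = [
    "Customer sent repeated wire transfers with no apparent purpose.",
    "Large cash deposit.",  # missing the 'who' and 'why' elements
]
for n in narratives:
    print(n, "->", missing_elements(n))

# Anomaly detection for targeted sampling over hypothetical transaction features
rng = np.random.default_rng(1)
features = rng.normal(size=(200, 3))  # e.g. amount, frequency, counterparties
features[:5] += 6                     # inject a few outliers for illustration
flags = IsolationForest(random_state=1).fit_predict(features)
print("records flagged for review:", int((flags == -1).sum()))
```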

 

The examination of AI and machine learning applications is often approached from an operational risk standpoint. The Office of the Comptroller of the Currency’s (OCC) Comptroller’s Handbook booklet on Model Risk Management is a must-read for any bank deploying AI models. It provides insight into the OCC’s approach to AI examination and helps banks prepare their model risk management procedures accordingly.

 

FEATURED BLOG: ChatGPT and Financial Services Compliance: Top 10 Questions

 

Source: Smarsh

 

Author: Tiffany Magri, Regulatory Advisor at Smarsh

As a Regulatory Advisor at Smarsh, Tiffany monitors, evaluates and consults on the financial services regulatory landscape. Tiffany has more than 10 years of experience facilitating compliance with laws and regulations, policies, and risk management. Prior to joining Smarsh, Tiffany was a Senior Associate at Benefit Street Partners and a Compliance Analyst at Broadstone and Manning & Napier Advisors.

 

About us

LS Consultancy are experts in Marketing and Compliance, and work with a range of firms to assist with improving their documents, processes and systems to mitigate any risk.

 

We provide cost-effective and timely bespoke copy advice and copy development services to make sure all your advertising and campaigns are compliant, clear and suitable for their purpose.

 

Our range of innovative solutions can be tailored to suit your unique requirements, no matter whether you’re currently working from home, or are continuing to go into the office. Our services can be deployed individually or combined to form a broader solution to release your energies and focus on your clients.

 

Contact us today for a chat or send us an email to find out how we can support you in meeting your current and future challenges with confidence.

 
