The pandemic has kick-started digital transformation in the banking industry worldwide. It has fueled the growth of the virtual banking market and the integration of artificial intelligence (AI) technologies into the financial sector to streamline workflows and serve a broader set of customers.
A study has shown that 52% of firms in the financial services industry are investing heavily in AI. In Asia, a recent study conducted by the Hong Kong Monetary Authority (HKMA) states that almost 90% of banks are planning to integrate AI to provide better onboarding and transactional services.
There is no doubt that AI offers great efficiencies and better customer experiences across a wide spectrum of areas, but organizations still face challenges when integrating AI into their workflows. Common challenges for banks include a lack of credible, high-quality data, the unavailability of effective AI solutions, and a lack of transparency and accountability in the delivered outcomes.
Regulatory bodies are working to develop and implement frameworks that enhance the accountability, explainability, and auditability of AI applications used within financial institutions (FIs) to mitigate these risks.
The HKMA issued new guidance to the banking industry on the development and use of AI applications. Although the guidelines are set out for the banking industry in Hong Kong, the principles could, in general, also apply to financial institutions in other countries. The HKMA guidance lists 12 main principles across 3 themes of machine learning.
3 Primary Themes of Machine Learning

- Governance & Accountability

Senior-level management of banks and financial institutions has to be accountable for all AI-driven decisions. It is the responsibility of board members and senior managers to build a proper governance framework and put risk management measures in place to ensure that AI applications operate as intended.
- Application Design & Development
Banks should integrate control measures and build sufficient audit trails during the design phase of AI applications to track and ensure an adequate level of explainability for the outcomes of AI applications. To help achieve this, banks should ensure that their developers have the required competence and experience, not only to develop the models but also to understand the interplay between algorithms and regulatory compliance.
Another thing to keep in mind is the quality and reliability of the data used for AI models. The quality of the data used to train AI applications makes all the difference between a successful process and an inefficient one. Banks should adopt an effective data governance framework to ensure that the data used are of good quality and that no bias is built into AI-driven decisions. If left undiscovered, bias in the data sets can affect the accuracy of AI decision-making. Bias and inaccuracy in AI programs can lead to a rise in the number of false positives.
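As a minimal sketch of the kind of data-governance check this implies, the snippet below flags groups in a training set whose outcome rate deviates sharply from the overall rate. The record fields, groups, and tolerance are hypothetical, and a real bias audit would go much further:

```python
from collections import Counter

def check_label_balance(records, group_key, label_key, tolerance=0.10):
    """Flag groups whose positive-outcome rate deviates from the
    overall rate by more than `tolerance` -- a simple bias smoke test."""
    overall = sum(r[label_key] for r in records) / len(records)
    flagged = {}
    for group in Counter(r[group_key] for r in records):
        subset = [r for r in records if r[group_key] == group]
        rate = sum(r[label_key] for r in subset) / len(subset)
        if abs(rate - overall) > tolerance:
            flagged[group] = round(rate, 2)
    return overall, flagged

# Hypothetical training records: 1 = verification approved, 0 = rejected.
records = (
    [{"region": "A", "approved": 1}] * 90 + [{"region": "A", "approved": 0}] * 10 +
    [{"region": "B", "approved": 1}] * 50 + [{"region": "B", "approved": 0}] * 50
)
overall, flagged = check_label_balance(records, "region", "approved")
```

Here both regions would be flagged, since their approval rates (0.9 and 0.5) each sit well away from the overall rate of 0.7 — exactly the sort of skew that should prompt a closer look before the data reach a model.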
- Continuous Monitoring and Maintenance
AI applications keep learning from live data, and their model behavior may therefore change after deployment. Timely reviews and re-validation of AI applications, and of any related services provided by third-party vendors, are vital to ensure the accuracy and appropriateness of the AI models.
As all AI applications rely on quality data, and these data are exposed to new cybersecurity threats, banks should also ensure compliance with data protection regulations across the jurisdictions in which they operate. All personally identifiable information (PII) should be properly encrypted both in transit and at rest.
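Encryption itself should rely on vetted libraries and managed key stores rather than hand-rolled code. As a complementary, stdlib-only illustration of protecting PII, the sketch below pseudonymizes sensitive fields with a keyed HMAC before they reach logs or analytics data sets; the key handling and field names here are illustrative only:

```python
import hashlib
import hmac

# Illustrative only: in practice, load the key from a secrets manager
# and rotate it on a defined schedule.
SECRET_KEY = b"rotate-me-regularly"

def tokenize_pii(value: str) -> str:
    """Replace a PII value with a keyed, irreversible token so raw
    identifiers never appear in logs or analytics data sets."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"name": "Jane Doe", "account": "HK-1234-5678"}
safe_record = {k: tokenize_pii(v) for k, v in record.items()}
```

Because the tokens are deterministic for a given key, records can still be joined or deduplicated for monitoring purposes without exposing the underlying identifiers.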
There are several areas where financial services firms are utilizing AI to streamline processes. AI solutions are being integrated into identity proofing technologies for fraud detection, AML and KYC compliance, and risk scoring.
AI has allowed financial services firms to make optimal use of huge data sets and machine learning to automate the ID verification process and deliver a more streamlined customer onboarding experience. Given the current landscape of AI solutions, however, banks have to be careful before deploying a completely AI-based customer verification solution. Because AI ID verification solutions rely on the data fed to them, they can be manipulated and may fail to accurately verify a user's digital ID. The inability to distinguish between a bad actor and a legitimate customer increases the potential risk of financial fraud.
In situations where quality data are unavailable, banks should use a hybrid approach to online identity verification that leverages both informed AI and human effort. There is no doubt that AI-based solutions can enhance operational efficiency and perform identity verification with an incredibly high level of accuracy, but they cannot completely replace humans. AI relies on machine learning, which makes decisions based on probabilities. Human verifiers can provide feedback to the algorithms, based on their knowledge of which outcomes were false positives or false negatives, to test the efficiency of the AI models.
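A hybrid workflow of this kind can be sketched as a confidence-based router plus a feedback metric computed from human verdicts. The thresholds and field names below are assumptions for illustration, not a prescribed design:

```python
def route_verification(score, auto_approve=0.95, auto_reject=0.05):
    """Route a model confidence score: clear-cut cases are automated,
    ambiguous ones go to a human verifier."""
    if score >= auto_approve:
        return "approve"
    if score <= auto_reject:
        return "reject"
    return "human_review"

def precision_from_feedback(reviews):
    """Compute precision on fraud flags from human-reviewed outcomes:
    each review pairs the model's flag with the human verdict."""
    flagged = [r for r in reviews if r["model_flagged_fraud"]]
    if not flagged:
        return None
    confirmed = sum(r["human_confirmed_fraud"] for r in flagged)
    return confirmed / len(flagged)

decision = route_verification(0.72)  # falls in the ambiguous band
```

A score of 0.72 lands between the two thresholds and is routed to human review; the human verdicts then flow back through `precision_from_feedback` to track how often the model's fraud flags were actually correct.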
Some of the major limitations of fully automated (purely AI-based) solutions stem from environmental noise: blurred images, dim or poor lighting, and excessive glare, all of which make it tough for AI solutions to verify customer IDs and other documents efficiently. A key reason to include humans in the customer verification process, rather than leaving everything to AI, is that human agents can specify document rejection reasons at a more granular level. Humans can verify ID document photos that were too blurry or where a finger obscured part of the document.
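One common way automated pipelines pre-screen for such noise is a variance-of-Laplacian blur test. Production systems typically use an image library such as OpenCV for this; the dependency-free sketch below treats a grayscale image as a list of rows, and the rejection threshold is purely illustrative:

```python
def laplacian_variance(img):
    """Variance of the 4-neighbour Laplacian of a grayscale image
    (list of rows of pixel values); low variance suggests a blurry image."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def is_too_blurry(img, threshold=100.0):
    """Flag an image for rejection (or human review) if it lacks the
    high-frequency detail a sharp document photo would have."""
    return laplacian_variance(img) < threshold

# A sharp checkerboard has high edge energy; a flat patch has none.
sharp = [[255 if (x + y) % 2 == 0 else 0 for x in range(8)] for y in range(8)]
flat = [[128 for _ in range(8)] for _ in range(8)]
```

Images that fail the check can be bounced back to the customer with a specific reason ("photo too blurry") before any AI verification runs, which is exactly the granular rejection feedback described above.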
The accuracy and effectiveness of the data fed to AI algorithms also influence machine-based decisions. Many companies and vendors tend to use old, outdated data to train the AI algorithms behind their eKYC solutions. One of the biggest problems with this approach is that the data aren't based on real-world applications, and in many cases the IDs have been improperly tagged, which introduces bias into any AI model derived from them. Moreover, these data sets are often too small to train AI solutions effectively. An AI-based solution may have to verify millions of documents, so a data set of just 100 documents is not an adequate basis for training these models.
Why Do AI-Based Verification Models Fail?
ID documents are physical objects that go through wear and tear, and they may feature manufacturing inaccuracies. Many factors can lead even the best algorithms to produce a high number of false positives. As mentioned above, environmental noise can cause a fraudulent ID to be marked as real or a real ID to be marked as fake. Or the photos on IDs may simply be too old to pass automated inspection.
Another reason AI-based verification models fail is that there is no limit to the variety of real-time data they encounter while verifying real customers. This is why AI solutions should be combined with human experts to reduce false positives and false negatives.
Using Solutions Other Than AI for eKYC
AI is a still-maturing industry; until it reaches its full potential, businesses can use other solutions to comply with KYC requirements. DIRO online document verification software provides instantaneous online document verification.
DIRO allows banks, payment providers, merchants, crypto wallets, and financial institutions to verify documents such as bank account documents, proof of address documents, student records, income tax documents, and others. DIRO lets clients log into any government or private portal without sharing their credentials, thus securing the customer verification process. The solution also verifies online customer documents by cross-referencing document data with the issuing sources, thus eliminating the use of fake and stolen documents by 100%.