The recently concluded Artificial Intelligence (AI) Summit in Paris reiterated the need for creating safe and trustworthy AI systems. The theme of trustworthy AI is indeed the most pressing policy concern across sectors, including the financial sector.
Recently, the Reserve Bank of India (RBI) commissioned a group of experts to steer the adoption of responsible AI, with the aim of ensuring that the AI systems driving the financial sector are safe and trustworthy. The committee, known as the FREE AI Committee, is tasked with developing a Framework for Responsible and Ethical Enablement of Artificial Intelligence.
The RBI is motivated to advance the gains that accrue from AI deployment, such as automation, efficiency and cost savings, while addressing the attendant risks. This piece discusses three policy considerations that may help shape a confident and trustworthy AI ecosystem in the financial sector.
AI ‘as a whole’
First, there is merit in looking at ‘AI as a whole’ when devising a regulatory and supervisory approach.
A ‘whole of AI’ approach to regulation would give the RBI a full view of all the actors, many of them unregulated, and their roles across an AI system’s lifecycle. Banks and NBFCs interact with unregulated actors such as data vendors, for sourcing external data, and third-party model developers, for building proprietary algorithms or customising AI models. These unregulated actors tend to provide technological expertise that does not qualify as a financial function, yet has grave implications for core financial decision-making.
For instance, a credit scoring algorithm could systematically treat individuals with similar credit risk profiles differently. To manage this risk and apportion liability, it is imperative to identify the actor responsible for the error in the model’s outcome.
For instance, the foundational model may have had inbuilt bias, or the model may have picked up bias while being retrained. In the former case, the third-party vendor that designed the model would be liable; in the latter, the financial institution that deployed and retrained it would be responsible.
Understanding the constituents of the AI value chain can help regulators devise proportionate and effective mitigants.
In the example above, regulators could share model contracts containing the safeguards that regulated entities must emphasise in their agreements with third parties. These model contracts also benefit third parties, who gain a clear understanding of their roles and responsibilities, ultimately fostering innovation in the industry.
Moreover, such upfront clarity, bolstered by contractual safeguards, also reduces the likelihood of customer harm and legal violations.
Second, it is imperative to address concerns arising from financial institutions’ growing dependence on non-financial entities like AI vendors.
The rise of digital financial services has raised the question of how non-financial entities should be managed. The RBI’s Digital Lending Guidelines have allayed this concern to a great extent. The current definition of Lending Service Providers (LSPs) appears capable of accommodating third-party AI providers, and the tools contemplated for LSPs also appear amenable to them.
For instance, requiring a public registry of AI vendors can guard against the concentration risk that arises when too many financial institutions rely on the same vendor for their underlying models.
Next, maintaining a blacklist of AI vendors, updated regularly based on feedback from the industry, can serve as a vetting tool. We understand that even larger financial institutions face challenges in identifying credible AI vendors to partner with.
Finally, AI vendors could also be encouraged to join the lending self-regulatory organisation (SRO). This move would help alleviate anxieties about the credibility and quality of AI vendors. Further, the SRO could act as a go-between for the regulator and the vendors, enabling delegated regulation.
Lastly, it is an opportune time to operationalise ‘Responsible and Trustworthy AI’ for all actors in the digital lending value chain.
While significant work has been done to conceptualise “Responsible and Trustworthy AI”, much of the discourse has leaned towards broad, sector-agnostic regulatory principles without unpacking what they would mean in the unique context of digital lenders in India.
The RBI routinely reminds digital lenders to consider fairness, prevent biases, safeguard privacy and ensure data robustness in algorithm-based decision-making.
Guidance needed
Yet, there remains a vacuum when it comes to guidance on the actions that could help lenders move in that direction.
With the FREE AI Committee’s work serving as a starting point for AI regulation in digital lending, establishing principle-level guidance on operationalising Responsible AI would instil clarity and confidence in a nascent, yet-to-mature industry. It would give lenders and AI vendors greater clarity on the considerations to account for when designing, developing and deploying AI.
Going a step further, these entities would want to take stock of their current practices and assess how far they are from widely accepted Responsible AI standards. Many of those just starting their AI journey will be waiting for best practices to guide them in integrating Responsible AI into their operations. Thus, translating the principle-level guidance into assessment tools and practices that the industry can adopt to evaluate its AI-based solutions would be a timely contribution from the Committee.
The journey of Responsible AI in finance has only just started. Equipping the industry with easy-to-use tools, such as checklists that first allow firms to take stock of their maturity, and complementing these with operational guidance to bolster their AI systems, would provide much-needed clarity and conviction to the industry and customers alike.
Chugh is an independent consultant with Dvara Research working on themes of digital financial inclusion and digital social protection. Khanna is a Research Associate with Dvara Research. This article benefitted from the joint work of Dvara Research and PwC.