
AI Governance Must Match AI Authority: FSS’s Rishi Verma on Fraud, Reconciliation, and the Next Phase of Intelligent Payments

Speaking to NervNow, Rishi Verma, Head of Artificial Intelligence at FSS's Center of Excellence, explains how to make payment infrastructure intelligent, accountable, and self-regulating at scale.


Speaking to NervNow, Rishi Verma, Head of Artificial Intelligence at FSS’s Center of Excellence, makes the case that India’s payments industry has largely solved the speed problem and is now staring at a far harder one: how to make payment infrastructure intelligent, accountable, and self-regulating at scale. He covers what that shift looks like across fraud detection, reconciliation, and compliance, and why governance frameworks need to catch up before AI accumulates more authority than the industry is ready to oversee.

India’s real-time payments infrastructure has scaled faster than almost any other market in the world, and UPI has made near-instant settlement a baseline expectation. The race to make payments faster has largely been run and won. What the industry is now working through is considerably harder: building systems that are intelligent enough to make decisions at scale and accountable enough to satisfy regulators who are still catching up with what AI inside financial infrastructure actually means.

Rishi Verma, Head of Artificial Intelligence at FSS’s Center of Excellence, sits at that intersection. With 14 years across AI, payments, and financial risk at RBL Bank, Diebold Nixdorf, L&T Infotech, and FirstRand Services, and now at FSS, which processes over 3 billion transactions annually and holds a 20 percent share of India’s e-commerce traffic, Verma sheds light on how AI is moving from a background scoring engine into the core of how payments are routed, authenticated, and settled.

Excerpts:

NervNow: For years, AI in banking meant fraud detection tools or customer chatbots. What changes when AI becomes embedded directly into core payment workflows rather than functioning as a separate layer?

Rishi Verma: When AI becomes embedded into core payment workflows, the shift is structural rather than functional. Historically, AI operated as a post-processing intelligence layer: fraud engines scored transactions after initiation, chatbots handled customer queries, batch AML systems ran overnight, and risk scoring remained external to core settlement engines. In this model, AI influenced outcomes indirectly.

When embedded directly into core rails, AI influences authentication sequencing, transaction routing across payment rails, dynamic limit management, real-time liquidity forecasting, intraday exposure calculation, settlement prioritization logic, and fee optimization decisions. Inline inference must execute within 50 to 120 milliseconds in high-volume, real-time systems, supported by containerized microservice-based inference, auto-scaling CPU and GPU orchestration, deterministic failover logic, and pre-computed feature stores.

If predictive liquidity models reduce buffer requirements by 5 to 10 percent, the capital efficiency gain becomes material at scale. For example, a payment institution holding Rs 10,000 crore in precautionary liquidity buffers can free up Rs 500 to 1,000 crore through improved predictive accuracy. Institutions that embed AI in core flows report 10 to 18 percent reduction in transaction abandonment, 15 to 25 percent fraud loss reduction, and 8 to 15 percent routing cost optimization.
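The liquidity arithmetic in that example is simple enough to work through directly. A minimal sketch, where the `buffer_release` helper is a hypothetical name and the figures are taken from the example above:

```python
def buffer_release(buffer: float, accuracy_gain: float) -> float:
    """Liquidity freed when improved predictive accuracy lets
    precautionary buffers shrink by a given fraction."""
    return buffer * accuracy_gain

# Rs 10,000 crore buffer, 5 to 10 percent reduction from the example
low = buffer_release(10_000, 0.05)    # Rs 500 crore freed
high = buffer_release(10_000, 0.10)   # Rs 1,000 crore freed
```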

The fundamental change is that AI transitions from analytical tool to system control logic.

NN: As transaction volumes scale in real-time ecosystems, how must core payments architecture evolve to remain intelligent, resilient, and compliant at the same time?

RV: Real-time ecosystems processing tens of thousands of transactions per second require architectural transformation across three dimensions: event-driven architecture with streaming intelligence, parallel decision orchestration, and compliance-by-design infrastructure.

Batch processing is incompatible with real-time scale; such systems instead require event streaming, stateless scoring services, distributed feature computation, and in-memory data grids, resulting in 30 to 40 percent latency reduction and 25 to 35 percent fewer processing bottlenecks. Fraud, AML, credit, liquidity, and reconciliation checks must execute concurrently. Each AI decision must log a model version ID, a feature vector snapshot, a confidence score, explainability output, and timestamped inference context, enabling regulatory replay, audit reproducibility, and model drift investigation.
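The decision-logging requirement Verma describes can be sketched as an append-only audit record. This is a minimal illustration, not FSS's implementation; the record fields mirror the list in the answer, and all names (`DecisionRecord`, `log_decision`, the example feature names) are hypothetical:

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class DecisionRecord:
    """One AI decision, captured for regulatory replay and drift investigation."""
    model_version: str        # model version ID
    feature_vector: dict      # snapshot of inputs at inference time
    confidence: float         # model score in [0, 1]
    explanation: dict         # e.g. per-feature contributions
    decided_at: float = field(default_factory=time.time)  # inference timestamp

def log_decision(record: DecisionRecord) -> str:
    """Serialize the record to one JSON line for an append-only audit log."""
    return json.dumps(asdict(record), sort_keys=True)

line = log_decision(DecisionRecord(
    model_version="fraud-v4.2",
    feature_vector={"amount": 1500.0, "velocity_1h": 3},
    confidence=0.92,
    explanation={"amount": 0.41, "velocity_1h": 0.23},
))
```

Because every line is self-describing JSON keyed by model version, an auditor can replay any historical decision against the exact model and inputs that produced it.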

Institutions that operationalize compliance-by-design report 20 to 30 percent faster regulatory audit cycles and 15 to 25 percent reduction in compliance operating cost. Resilience includes decision traceability under stress conditions.

NN: Is AI primarily automating tasks or fundamentally redesigning workflows such as reconciliation, dispute handling, and risk management?

RV: AI maturity determines whether it automates or redesigns. In reconciliation, automation delivers 92 to 97 percent auto-matching accuracy and 40 to 60 percent manual effort reduction. Redesign introduces predictive break probability scoring, intraday exposure alerts, and automated counterparty pre-notification, resulting in 50 to 70 percent faster break resolution and 15 to 20 percent reduction in overnight liquidity buffers.

In dispute handling, NLP classification of dispute codes achieves 85 to 92 percent accuracy. Predictive chargeback likelihood scoring, pre-dispute customer communication, and merchant risk advisory alerts result in 12 to 20 percent reduction in dispute incidence and 18 to 25 percent faster resolution. In risk management, dynamic transaction-level exposure recalibration and real-time credit overlay adjustments produce 3 to 7 percent optimization in risk-weighted asset allocation.

AI redesigns workflows when it shifts decision timing from reactive to anticipatory.

NN: Reconciliation has traditionally been treated as a back-office accounting function. How can AI transform reconciliation into a live operational control system rather than a delayed balancing exercise?

RV: In real-time ecosystems, reconciliation delay equals systemic risk amplification. Using supervised learning on historical mismatch data, predictive break identification achieves 70 to 85 percent precision before settlement close and reduces exception backlog by 50 to 65 percent. Event-by-event ledger cross-verification reduces T+1 reconciliation dependency and manual investigation cycles.

Intraday net position prediction models achieve 85 to 95 percent forecasting accuracy in stable conditions and 70 to 80 percent during volatility spikes, resulting in 10 to 20 percent lower precautionary liquidity reserves and improved intraday capital utilisation. Reconciliation evolves into a continuous risk monitoring function.
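Predictive break identification of the kind described above can be illustrated with a toy logistic scorer over mismatch features. The weights, bias, and feature names below are entirely hypothetical stand-ins for parameters that would be learned offline from historical mismatch data:

```python
import math

# Hypothetical weights, illustrating parameters learned from historical breaks
WEIGHTS = {"amount_delta": 2.1, "timestamp_skew_s": 0.04, "channel_mismatch": 1.7}
BIAS = -4.0

def break_probability(features: dict) -> float:
    """Logistic score: probability that a transaction pair fails to reconcile."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

# A pair with a large amount delta and a channel mismatch scores high;
# a clean pair scores near zero, so exceptions can be triaged before close.
risky = break_probability({"amount_delta": 2.0, "channel_mismatch": 1.0})
clean = break_probability({"amount_delta": 0.0, "channel_mismatch": 0.0})
```

Scoring pairs before settlement close is what turns reconciliation from a T+1 balancing exercise into the live control function the answer describes.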

NN: How can AI simplify compliance processes without creating new transparency or explainability challenges?

RV: AI simplifies compliance through intelligent transaction monitoring, behavioral anomaly clustering, NLP-driven regulatory mapping, and continuous control validation, producing 20 to 35 percent reduction in AML false positives, 25 to 40 percent reduction in manual review hours, and 15 to 30 percent faster suspicious activity reporting.

New risks include model opacity, bias risk, and drift vulnerability. Mitigation requires explainability logging, population stability index drift monitoring with trigger thresholds above 0.2, independent model validation teams, and quarterly bias audits. AI must be explainable, stress-tested, and reviewable.
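The population stability index monitoring mentioned above, with its 0.2 trigger threshold, can be sketched in a few lines. This is the standard PSI formula over binned score distributions; the bin values are illustrative:

```python
import math

def psi(expected: list, actual: list) -> float:
    """Population Stability Index between two binned score distributions.
    Inputs are per-bin proportions that each sum to 1; bins with zero
    mass are floored to avoid log-of-zero."""
    eps = 1e-4
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, eps), max(a, eps)
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]   # score distribution at training time
drifted = [0.10, 0.20, 0.30, 0.40]    # distribution observed in production
value = psi(baseline, drifted)
needs_review = value > 0.2            # trigger threshold from the interview
```

A PSI above 0.2 is conventionally read as significant drift, which is why it works as an automatic trigger for independent model review.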

NN: As transaction velocity increases across real-time systems, how must fraud and anomaly detection evolve from static rule-based engines to adaptive, self-learning models?

RV: Static rule engines often produce 2 to 5 percent false positive rates, along with customer friction and escalation overhead. Adaptive machine learning systems demonstrate 20 to 35 percent false positive reduction, 10 to 20 percent improved fraud detection rate, and 5 to 10 percent improved approval rate. Graph-based models identify mule account networks, synthetic identity clusters, and coordinated micro-transaction rings, delivering 25 to 40 percent higher fraud ring detection efficiency.
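The graph-based view of fraud can be sketched, in deliberately simplified form, as finding connected components in the account-to-account transfer graph; production systems would add edge weights, timing, and richer graph features, and all names below are illustrative:

```python
from collections import defaultdict

def fraud_rings(transfers: list, min_size: int = 3) -> list:
    """Group accounts into connected components of the transfer graph and
    return components large enough to warrant mule-ring review."""
    adj = defaultdict(set)
    for src, dst in transfers:
        adj[src].add(dst)
        adj[dst].add(src)
    seen, rings = set(), []
    for node in adj:
        if node in seen:
            continue
        # Iterative depth-first search to collect one component
        comp, stack = set(), [node]
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        if len(comp) >= min_size:
            rings.append(comp)
    return rings

transfers = [("a", "b"), ("b", "c"), ("c", "a"),   # suspicious triangle
             ("x", "y")]                           # ordinary pair
rings = fraud_rings(transfers)
```

The point of the network view is that accounts which look benign individually become suspicious once their connectivity is examined together.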

Fraud detection must become behavioral, contextual, network-aware, and continuously retrained.

NN: What does it take to move from AI experimentation to institutionalising AI as a core organisational capability?

RV: Institutionalization requires alignment across technology, governance, and business strategy. Technology requires an enterprise feature store, a model registry, automated retraining pipelines, canary deployment mechanisms, and rollback capability within minutes. Governance requires AI risk committees, independent validation, quarterly stress tests, and vendor concentration monitoring.

Commercial alignment links AI to fraud loss reduction KPIs, cost-to-serve metrics, revenue lift targets, and capital efficiency goals. Organizations that institutionalize AI observe 2 to 3 times increase in model deployment rates, 30 percent faster production cycles, and 15 to 25 percent operational cost savings in target workflows. AI becomes operational doctrine.

NN: What metrics truly indicate that AI is delivering structural value rather than incremental optimisation?

RV: Surface improvements include lower latency and reduced manual hours. Structural indicators include fraud loss measured in basis points of transaction value, chargeback ratio, liquidity buffer reduction, cost per transaction at scale, regulatory penalty reduction, and operational risk capital impact.

Reducing fraud loss from 12 basis points to 8 basis points at billion-dollar transaction scale produces significant annual savings and capital relief. Reducing reconciliation break rate from 0.5 percent to 0.2 percent lowers systemic operational risk exposure. Structural value appears in balance sheet strength, capital efficiency, and reduced volatility in risk metrics.

NN: When AI influences authentication, transaction approval, or risk scoring, how should accountability and governance frameworks evolve to ensure trust and oversight?

RV: Governance must evolve across technical, operational, and strategic layers. The technical layer includes model version logging, replay capability, and confidence scoring storage. The operational layer includes human-in-the-loop thresholds, escalation protocols, and adversarial stress testing. The strategic layer includes board-level AI risk reporting, quarterly model risk dashboards, and external audit integration.

Institutions implementing structured AI governance report 20 to 30 percent fewer audit observations, faster model approval timelines, and reduced reputational risk exposure. AI governance must match AI authority.

NN: As AI becomes deeply embedded in payment infrastructure across institutions, does systemic risk become harder to detect because intelligence is distributed and automated? How should regulators and payment networks prepare?

RV: Distributed intelligence increases correlated model risk when multiple institutions use similar vendor models, training data overlaps, and fraud detection strategies converge. Mitigation strategies include cross-institution anomaly information sharing, model diversity requirements, mandatory scenario stress testing, and centralised AI observability hubs.

Systemic AI risk resembles correlated liquidity shocks through algorithmic behavior. Distributed intelligence improves efficiency but may amplify correlated failure.

NN: India leads globally in real-time transaction scale. What will define the next phase of payments evolution: greater speed, deeper intelligence, or stronger trust architectures built around AI?

RV: Speed gains are approaching diminishing returns. The defining factors will be deeper intelligence, including behavioral identity verification, transaction-level exposure adjustments, and predictive liquidity orchestration, along with stronger trust architecture including explainable AI decision logs, regulator-accessible dashboards, and real-time transparency frameworks.

Autonomous financial control systems will include self-adjusting fraud thresholds, automated compliance monitoring, and predictive systemic anomaly detection. The next phase is intelligent, capital-efficient, and self-regulating financial infrastructure.

Ojasvi Nath

