



NervNow Interview Series

Where AI Actually Lives Inside Indian Fintech


NervNow Editorial · May 2026
Kumar Amit Sinha
Advisor to Startups and Founders  ·  Ex-Co-Founder and COO, Castler  ·  Ex-VP, Razorpay

Kumar Amit Sinha has spent nearly two decades building and scaling payments and banking infrastructure for India’s digital economy. Joining Razorpay’s enterprise business early, he grew monthly GMV from ₹800 crore to ₹75,000 crore. He subsequently co-founded Castler, a digital escrow platform connecting more than 15 banking partners with enterprises across the country. He currently advises startups and founders on fintech infrastructure and growth.

Kumar Amit Sinha has spent nearly two decades at the layer of Indian fintech that customers never see, across payment gateways, escrow systems, and bank-enterprise rails that look very different on the surface but circle the same problem at the center: how does money move at scale when trust between counterparties is never automatic? From scaling Razorpay’s enterprise business through India’s most consequential decade of digital payments to co-founding Castler’s digital escrow platform at the seam between banks and enterprises, he has operated at the intersection of payments, risk, and regulated trust through several generations of infrastructure change. NervNow spoke with him about where AI is actually changing how Indian fintech works at the core, why most institutions are still experimenting at the edges, how RBI’s caution should be read, and what programmable money will demand from the rails.

NervNow
You have spent your career at the infrastructure layer of Indian fintech, where money actually moves between institutions rather than where customers see it. From that vantage point, where is AI genuinely changing how financial infrastructure works, and where is it still more promising than reality?
Kumar Amit Sinha

From the infrastructure layer, AI’s impact is very real but also very misunderstood.

Where it is genuinely working is in reconciliation, exception handling, and operational intelligence. These are high-volume, rule-heavy problems where AI can meaningfully reduce manual effort and improve accuracy. Systems are moving from reactive reporting to real-time diagnosis: understanding not just what failed, but why. AI is also strengthening fraud detection, not by replacing rules, but by making them more adaptive and context-aware.
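
To make the shift from reactive reporting to real-time diagnosis concrete, here is a minimal sketch of what an automated “why did this fail” step can look like. The field names, error codes, and categories are invented for illustration, not taken from any production system.

    # Minimal sketch: turn a failed transaction record into a diagnosis, not just a flag.
    # All field names and error codes below are hypothetical.
    def diagnose_failure(txn: dict) -> str:
        if txn.get("issuer_error_code") in {"TIMEOUT", "SWITCH_DOWN"}:
            return "issuer_downtime"            # bank-side outage, retry later
        if txn.get("retry_count", 0) >= 3 and txn.get("latency_ms", 0) > 5000:
            return "gateway_timeout"            # our side is slow, alert the routing team
        if txn.get("kyc_status") == "pending":
            return "compliance_hold"            # not a technical failure at all
        return "needs_manual_review"            # genuine exception, route to an ops queue

The point is not the rules themselves but that the system answers the “why” at the moment of failure rather than in a next-day report.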

However, much of the narrative around AI transforming core financial infrastructure is still ahead of reality. Money movement is deterministic, regulated, and audit-heavy. You cannot have black-box systems deciding how funds flow. Autonomous treasury and fully AI-driven compliance remain constrained by trust, accountability, and regulatory expectations.

The real shift is more subtle but powerful. AI is not changing the rails. It is adding an intelligence layer on top of them: moving systems from record-keeping to decision-support. The future of financial infrastructure is AI making it more observable, explainable, and optimizable, not AI replacing it.

AI is not changing the rails. It is adding an intelligence layer on top of them.

NervNow
Most AI deployments in fintech sit on top of existing infrastructure: scoring, recommendations, chatbots. The harder problem is embedding AI into the core of how transactions are routed, settled, and reconciled. How does that change the engineering and risk calculus, and how many institutions in India are actually doing it?
Kumar Amit Sinha

Most AI in fintech today still sits at the edges. The real challenge is pushing it into the core transaction layer, where money actually moves.

Once you do that, both engineering and risk fundamentally change. On the engineering side, you move from fully deterministic systems to hybrid architectures where AI makes real-time decisions within tightly controlled boundaries. That means building for latency, explainability, auditability, and fail-safe overrides. You cannot afford a black box when money is in motion.

On the risk side, it shifts from system reliability risk to decision risk. A wrong model output can misroute funds, impact settlements, or create regulatory exposure. AI has to be policy-bound, observable, and always override-able.
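
A minimal sketch of what “policy-bound, observable, and always override-able” can mean in practice is below. The model interface, confidence threshold, and route names are assumptions for the example, not a description of any specific gateway.

    # The model only suggests; deterministic policy decides, and every decision is logged.
    ALLOWED_ROUTES = {"bank_a", "bank_b"}     # hypothetical routes approved by policy
    FALLBACK_ROUTE = "bank_a"                 # deterministic default when the model is not trusted

    def choose_route(txn, model, audit_log):
        suggestion, confidence = model.predict(txn)          # assumed model interface
        within_policy = suggestion in ALLOWED_ROUTES and confidence >= 0.90
        decision = suggestion if within_policy else FALLBACK_ROUTE
        audit_log.append({                                   # audit trail for every decision
            "txn_id": txn["id"],
            "suggested": suggestion,
            "confidence": confidence,
            "final": decision,
            "fell_back": not within_policy,
        })
        return decision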

As for adoption, it is still very early. At most five to ten institutions in India are meaningfully embedding AI into core routing or settlement logic. Players like Pine Labs, Razorpay, Cashfree, and PayU are experimenting at the edge, with some selective movement inward. Among banks, ICICI and HDFC are exploring it in controlled, narrow use cases. The pipes have not changed. What is changing is the intelligence sitting on top of them, and slowly beginning to seep into them.

NervNow
Reconciliation has been a chronic problem in enterprise finance for decades: manual, delayed, error-prone. AI should be well-suited to fix it. Why has progress been slower than expected, and what does a genuinely well-solved reconciliation problem look like when AI is doing the work?
Kumar Amit Sinha

Reconciliation has been slower to crack with AI because the problem is not intelligence. It is data and context. Financial data is fragmented across banks, gateways, and internal systems, often inconsistent and incomplete. More importantly, reconciliation is not just pattern matching. It requires understanding business events: refunds, reversals, delays, failures. Add to that the cost of being wrong, and institutions are forced to keep humans in the loop.

When it is genuinely solved, reconciliation becomes almost invisible. You are looking at near real-time, 95 to 99 percent automated matching, with AI not just identifying exceptions but explaining them and triggering corrective actions: retries, settlements, alerts. In that world, reconciliation stops being an operational burden and becomes a continuous, self-correcting system of intelligence.
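
A toy version of that loop, with invented field names, is sketched below; a real engine matches on far more attributes and tolerances, but the shape is the same: match what it can, and explain and act on what it cannot.

    # Toy reconciliation: pair bank statement rows with internal ledger entries by reference.
    def reconcile(bank_rows, ledger_rows, tolerance=0.01):
        matched, exceptions = [], []
        ledger_by_ref = {row["utr"]: row for row in ledger_rows}   # hypothetical UTR reference field
        for bank_row in bank_rows:
            ledger_row = ledger_by_ref.get(bank_row["utr"])
            if ledger_row and abs(bank_row["amount"] - ledger_row["amount"]) <= tolerance:
                matched.append((bank_row, ledger_row))
            else:
                reason = "missing_ledger_entry" if ledger_row is None else "amount_mismatch"
                action = "trigger_retry" if reason == "missing_ledger_entry" else "raise_settlement_alert"
                exceptions.append({"utr": bank_row["utr"], "reason": reason, "suggested_action": action})
        return matched, exceptions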

NervNow
Fraud in digital financial infrastructure has become increasingly sophisticated: coordinated, network-level, designed to look like legitimate behavior until it is too late. What does the current fraud landscape look like from inside the infrastructure, and how far behind are most institutions’ detection systems?
Kumar Amit Sinha

From inside the infrastructure, fraud today does not look like fraud. It looks like perfectly normal behavior at scale. It is no longer about a single bad transaction or user. It is coordinated, network-level activity: mule accounts, device farms, and flows moving rapidly across institutions, exploiting timing gaps and visibility silos.

Most detection systems are still built for a different era. They are largely rule-based and node-level, while fraud has become adaptive and network-level. Each bank or platform sees only its slice of the transaction, whereas the fraudster is operating across the entire graph.

The industry is typically one to two steps behind. By the time a pattern is detected and rules are updated, the fraud has already evolved. The gap is this: fraud has become systemic and coordinated, while detection remains fragmented and reactive.

NervNow
The core tension in fraud detection is between sensitivity and false positives: flag too little and you miss fraud, flag too much and you freeze legitimate transactions. How do you think about calibrating that balance as transaction volumes scale, and does the right answer change at different stages of growth?
Kumar Amit Sinha

The balance between fraud detection and customer friction is one of the hardest problems in financial infrastructure, and there is no static answer. It is a continuously moving target.

At its core, you are not choosing between catching fraud and protecting customer experience. You are designing a system that adapts in real time to both. That means moving away from binary decisions, block or allow, toward risk-based, context-aware responses. A low-risk transaction should pass seamlessly. A high-risk one might trigger step-up authentication or delayed processing rather than an outright decline.
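
In code, the move away from block-or-allow can be as small as mapping a risk score to graded responses; the thresholds below are placeholders and would in practice be learned and continuously recalibrated from fraud and false-positive feedback.

    # Graded responses instead of a binary block/allow. Thresholds are illustrative only.
    def respond(risk_score: float) -> str:
        if risk_score < 0.30:
            return "allow"              # low risk passes seamlessly
        if risk_score < 0.70:
            return "step_up_auth"       # e.g. an OTP or additional verification
        if risk_score < 0.90:
            return "hold_for_review"    # delayed processing rather than an outright decline
        return "decline"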

As transaction volumes scale, this balance becomes more data-driven and dynamic. You evolve from rule-based systems to adaptive risk models that learn from feedback loops (what was fraud, what was a false positive) and recalibrate continuously.

The right balance also changes with the stage of the business. Early on, you bias toward minimizing false positives to build trust and growth. As you scale, you introduce more controls and segmentation. At maturity, the goal is precision: minimizing fraud without compromising experience, using deeper intelligence across networks. The winners are not those who block the most fraud, but those who do it with the least visible friction.

The winners are not those who block the most fraud, but those who do it with the least visible friction.

NervNow
AI fraud models are typically trained on historical patterns, which means they are always, to some degree, fighting the last war. How do you build detection systems that stay ahead of fraud that has not happened yet, and what does that require architecturally?
Kumar Amit Sinha

The fundamental limitation of AI in fraud detection is that it learns from what has already happened, while fraudsters are constantly inventing what has not. If you rely purely on historical models, you are always reacting, never leading.

The way forward is to shift from pattern recognition to behavior intelligence. Instead of asking whether we have seen this fraud before, systems need to ask whether this behavior looks normal in this context. That is a very different lens.

Architecturally, this means building for real-time rather than hindsight. Streaming data pipelines, a unified feature layer across accounts, devices, and transactions, and critically, a graph view of the ecosystem, because modern fraud is networked rather than individual. On top of that sits a decision engine combining models, rules, and human overrides, with continuous feedback loops.
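
For the graph view in particular, a small sketch using the networkx library shows the idea: accounts linked by shared devices form clusters that no node-level rule would catch on its own. The events and the threshold are invented for illustration.

    import networkx as nx

    # Link accounts to the device fingerprints they transact from (illustrative data).
    events = [("acct_1", "dev_A"), ("acct_2", "dev_A"), ("acct_3", "dev_A"), ("acct_9", "dev_Z")]
    G = nx.Graph()
    for account, device in events:
        G.add_edge(account, device)

    # Clusters with many accounts concentrated on few devices are candidate mule rings.
    for component in nx.connected_components(G):
        accounts = {node for node in component if node.startswith("acct")}
        devices = component - accounts
        if len(accounts) >= 3 and len(devices) <= 2:        # toy threshold
            print("suspicious cluster:", sorted(accounts))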

The institutions that get ahead are the ones treating fraud systems not as static models, but as living systems: constantly learning, testing in shadow mode, and recalibrating. You do not beat new fraud by predicting it. You beat it by detecting what looks abnormal before it becomes obvious.
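
Shadow mode, mentioned above, is simple to express: a candidate model scores the same live traffic as the production system, its outputs are logged but never enforced, and the disagreements feed the next recalibration. The interfaces here are assumptions.

    # The candidate model sees real traffic but never touches real money.
    def score_in_shadow(txn, live_model, candidate_model, shadow_log):
        live_decision = live_model.decide(txn)            # the only decision that executes
        candidate_decision = candidate_model.decide(txn)  # logged for comparison only
        shadow_log.append({
            "txn_id": txn["id"],
            "live": live_decision,
            "shadow": candidate_decision,
            "disagree": live_decision != candidate_decision,
        })
        return live_decision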

NervNow
When AI is making real-time decisions inside transaction flows that touch regulated entities, the accountability question becomes genuinely complicated. How is the industry thinking about who is responsible when an AI-driven decision causes harm, and is that thinking keeping pace with how fast AI is being deployed?
Kumar Amit Sinha

The industry has taken a clear stance: accountability cannot be outsourced to AI. In any transaction flow involving regulated entities like banks or NBFCs, final responsibility still sits with the institution, regardless of whether the decision was driven by an algorithm or a human.

In practice, AI is being positioned as a decision-support layer rather than a decision-maker. Every AI-led action is expected to be bounded by policy, backed by audit trails, and where needed, reversible or overridable. AI generates intelligence, but the regulated entity remains the accountable principal.
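
One way that decision-support posture shows up in code is a thin layer in which high-impact recommendations are queued for a human owner instead of executing automatically, and everything is written to an audit trail. The names and impact labels are hypothetical.

    # AI recommends; anything high-impact waits for a named human before it executes.
    def handle_recommendation(txn, recommendation, review_queue, audit_trail):
        if recommendation["impact"] == "high":             # e.g. freezing funds or blocking settlement
            review_queue.put((txn, recommendation))        # human-in-the-loop before anything happens
            status = "pending_human_approval"
        else:
            status = "auto_applied_within_policy"
        audit_trail.write({
            "txn_id": txn["id"],
            "action": recommendation["action"],
            "status": status,
            "accountable_entity": "licensed_institution",  # accountability never moves to the model
        })
        return status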

The challenge is that deployment is outpacing governance. AI is already embedded in real-time flows including routing, risk scoring, and anomaly detection, but frameworks around model liability, explainability, and shared accountability are still evolving. Most institutions are managing this gap through conservative design: fallback rules, human-in-the-loop systems, tighter controls.

The thinking is directionally right but not fully mature. AI may influence decisions, but in financial infrastructure, accountability still firmly sits with the entity that holds the license.

NervNow
RBI’s approach to AI in financial services has been cautious and incremental. From the perspective of someone who has built infrastructure that sits between banks and enterprises, is that caution appropriate, or is it slowing down innovation that would make the system safer and more efficient?
Kumar Amit Sinha

RBI’s caution is not a constraint. It is a recognition of where failure actually sits.

When you operate at the layer where money moves between institutions, the cost of getting it wrong is not a bad user experience. It is a systemic risk. In that context, RBI’s incremental approach to AI, demanding explainability, auditability, and human oversight, is both rational and necessary.

That said, the slowdown is real. The caution has meant that AI adoption is largely confined to the edges: fraud detection, analytics, support. The core orchestration layer remains rule-driven. This limits the system’s ability to evolve toward more adaptive, network-aware intelligence that could reduce failures and fraud at scale.

RBI is right to be cautious at the core. The real opportunity lies in enabling faster innovation around it. Keep the rails deterministic and tightly governed. Allow controlled experimentation on top, with clear guardrails. That is where the next leap in safety and efficiency will come from.

RBI’s caution is not a constraint. It is a recognition of where failure actually sits.

NervNow
There is a version of compliance in fintech that is reactive: you build the product and figure out how to make it compliant later. And there is a version where compliance is designed in from the start. As AI takes on more authority inside financial systems, which approach is the industry gravitating toward, and what are the consequences of getting that wrong?
Kumar Amit Sinha

The industry is clearly moving toward compliance-by-design, especially as AI starts influencing real financial decisions. When models are embedded into flows like underwriting, routing, or fraud control, you cannot retrofit compliance later. It has to be coded into the system from day one.

Most serious players today are building with policy guardrails, audit trails, explainability layers, and human override mechanisms. The ones still taking a reactive approach tend to do so at the edges, not in core infrastructure.
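
In practice, compliance-by-design often starts as something unglamorous: guardrails expressed as explicit, versioned configuration that the system checks before any model-driven action executes. The limits below are hypothetical.

    # Hypothetical guardrail configuration, checked before any AI-led action runs.
    GUARDRAILS = {
        "max_auto_refund_inr": 10_000,          # anything larger needs human approval
        "blocked_actions": {"close_account"},   # never automated, whatever the model's confidence
        "require_explanation": True,            # every action must carry reasoning for the audit trail
    }

    def is_permitted(action, amount_inr, explanation):
        if action in GUARDRAILS["blocked_actions"]:
            return False
        if action == "auto_refund" and amount_inr > GUARDRAILS["max_auto_refund_inr"]:
            return False
        if GUARDRAILS["require_explanation"] and not explanation:
            return False
        return True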

The risk of getting this wrong is severe: regulatory action, loss of licenses, systemic trust breakdown with banks and enterprises, AI decisions that become unexplainable, and the costly re-architecture of entire systems.

With AI, compliance is no longer a checkpoint. It is part of the architecture. Get it wrong, and you do not just fix a feature. You rebuild the system.

NervNow
You scaled Razorpay’s enterprise business 375x in monthly GMV. That kind of growth puts extraordinary stress on every layer of the stack: infrastructure, fraud systems, compliance, trust. What breaks first when AI-powered financial systems scale that rapidly, and what does that experience tell you about how institutions should be building today?
Kumar Amit Sinha

At that scale, what breaks first is not the core infrastructure. It is the trust layer around it.

You start seeing cracks in reconciliation, where money moves faster than systems can explain. In fraud detection, where attackers adapt faster than models. In compliance, where edge cases grow faster than policies.

AI accelerates all of this. It improves speed and decisioning, but it also amplifies any weakness in data, controls, or feedback loops. The key learning is that institutions should not build for scale as an afterthought. They need to build for trust at scale from day one: keep core systems deterministic and auditable, layer AI with clear guardrails and human override, invest early in reconciliation, risk, and explainability, and design for real-time visibility rather than post-facto fixes.

At scale, growth does not break your systems. Lack of trust architecture does.

NervNow
India’s digital economy is moving toward programmable money: conditional disbursements, tokenized assets, smart contracts. That is a fundamentally different operating environment for financial infrastructure. Where does AI fit into that future, and what do institutions that want to remain relevant in that world need to be building right now?
Kumar Amit Sinha

Programmable money changes the role of financial infrastructure from moving money to enforcing logic on money. That is exactly where AI becomes critical.

In a world of conditional disbursements, tokenized assets, and smart contracts, the question is no longer whether the payment happened. It is whether the payment should happen, and under what conditions. AI becomes the layer that interprets context, monitors behavior, and enables dynamic decisioning on top of deterministic contracts.

Its role will evolve across three areas: as a context engine validating whether real-world conditions for a transaction are actually met; as a risk intelligence layer detecting manipulation, collusion, or edge-case abuse in programmable flows; and as an optimization layer routing, timing, and structuring transactions for efficiency and success rates. Importantly, AI will not replace the rules. It will sit alongside them, making systems more adaptive without compromising control.
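
A toy conditional disbursement makes that division of labour visible: the hard conditions are deterministic and non-negotiable, while an assumed AI context score can only add caution, for example routing to review; it can never relax the rules. Everything here is invented for illustration.

    # Toy conditional disbursement: deterministic rules decide, AI context can only escalate.
    def disburse(request, context_score):
        # Hard, deterministic conditions: the programmable-money contract itself.
        if not request["milestone_verified"]:
            return "reject: milestone not verified"
        if request["amount"] > request["escrow_balance"]:
            return "reject: insufficient escrow balance"
        # AI context signal (0 = normal, 1 = highly anomalous) can hold, never approve.
        if context_score > 0.8:
            return "hold: route to manual review"
        return "release"

    print(disburse({"milestone_verified": True, "amount": 5_000, "escrow_balance": 20_000}, 0.1))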

For institutions that want to stay relevant, the shift starts now. They need programmable rails that support conditional logic rather than just transfers. Real-time, event-driven architectures, because batch systems will not survive in this world. AI-ready layers with feature stores, decision engines, and explainability built in. And trust architecture: auditability, compliance-by-design, and clear accountability frameworks. The winners will be those who combine deterministic execution with intelligent decisioning.

The winners will be those who combine deterministic execution with intelligent decisioning.

Disclaimer: The views expressed in this interview are personal to Kumar Amit Sinha and do not represent the positions of any organization he is currently or formerly associated with, or of NervNow.
