
NervNow Interview Series
Accountability Cannot Be Automated. Someone Has to Own the Outcome.
Georg Langlotz, GCRG Head of AI Centre of Excellence at UBS, spoke with NervNow about building AI inside one of the world’s most regulated institutions, what it actually takes to move compliance AI from pilot to production, the governance architecture most banks are not yet building, and why India does not have an AI talent problem but an AI ambition problem.
Georg Langlotz leads the AI Centre of Excellence at UBS, where he has spent years building AI governance frameworks, deploying compliance AI agents, and scaling AI capability across one of the world’s largest financial institutions. He has worked extensively at the intersection of machine learning, regulatory compliance, and enterprise transformation, including leading a cross-border AI team built between Switzerland and India. His work on moving AI from pilot to production in highly regulated environments has shaped how global banks think about model risk, explainability, and accountability in the age of generative and agentic AI.
Georg Langlotz does not talk about AI the way most technology leaders do. He does not reach for the benchmark score or the headline capability. He reaches for the harder question: who is accountable when something goes wrong? As Global Head of AI Centre of Excellence at UBS, he has spent years building the governance architecture, the operating models, and the human systems that allow AI to work inside one of the most regulated environments on earth. NervNow sat down with him to find out what that looks like from the inside, and what financial institutions, enterprise leaders, and Indian organizations need to understand about where AI governance is heading.
Every decade in financial services has had its technology moment. Client-server, the internet, cloud, mobile, and then generative AI and agentic AI arriving almost simultaneously. Each one was disruptive in its time. But when I reflect on what makes generative AI genuinely different, two things stand out that I cannot say about any previous technology wave.
The first is that every previous technology delivered either productivity or creativity. Generative AI delivers both at the same time. That combination is unprecedented. It does not just help you do existing tasks faster. It opens entirely new possibilities in how work gets done and what gets created.
The second is more fundamental. Previous IT was built around deterministic problem-solving: the same input always produces the same output. Generative AI is non-deterministic. That is a profound shift, because it is much closer to how humans actually think and work. We do not always produce the same answer to the same question. That quality is what gives generative AI its power, and it is also what creates the challenges people call hallucination. It is not a flaw to be fixed. It is an inherent property of a technology that thinks creatively rather than computationally.
The organizations getting this wrong are treating AI like a faster version of what they already do. The ones getting it right are asking a different question entirely: what decisions can now be made better, faster, and at scale, and what does that mean for the humans who used to make them? That reframe is where the real transformation starts.
Compliance is actually the ideal proving ground for AI, precisely because the stakes are high and the constraints are real. You cannot move fast and break things when the thing you might break is a regulatory framework or a client’s financial future.
What it taught me is that constraint is a design input, not an obstacle. Every decision about what to build was shaped by the question: Can I explain this to a regulator, a senior risk committee, and a frontline analyst in the same breath? If the answer is no, the model is not ready. That discipline forced us to build things that were genuinely robust, not just technically impressive. It also meant that when we got sign-off, we got it with conviction, not just tolerance.
The conventional view is that governance is the handbrake. My experience is the opposite: bad governance is the handbrake. Good governance is the accelerator.
When you have no framework, every AI deployment triggers a bespoke risk conversation from scratch. Every stakeholder wants to relitigate the same questions. Every legal, compliance, and risk team runs its own parallel review. That is what slows you down.
When you have a clear governance framework, those conversations are already answered at the structural level. You walk in with a documented risk taxonomy, a model validation approach, an accountability map, and an explainability standard. The conversation moves from “Should we do this?” to “Here is how we have already thought about this.” Deployment timelines compress dramatically.
The governance architecture we built at a large Swiss bank demonstrated this directly. That upfront investment paid back with speed at every subsequent deployment. My conviction has not changed: governance enables trust, and trust enables speed.
I want to be precise here, because the honest answer is more nuanced than it first appears. Model drift is not new. Data drift, concept drift, model drift, and output drift all arrived with machine learning, and any well-run ML governance team was already managing them. So if a bank’s model risk function has been doing ML governance properly, it is not starting from zero.
But AI introduces dimensions that genuinely require the existing framework to be extended. The most significant is non-determinism. Traditional ML models are deterministic: the same input produces the same output. Generative AI is not. That breaks the standard validation paradigm, because you cannot simply validate once and monitor for drift. The output space is inherently variable, and that variability is a feature, not a defect.
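To make that concrete: where a deterministic model can be validated once against a fixed test set, a non-deterministic one has to be sampled repeatedly and judged on how stable its outputs are. Here is a minimal sketch of that idea, assuming a generic generate callable and an illustrative agreement threshold rather than any bank's actual validation standard:

```python
from collections import Counter

def consistency_check(generate, prompt: str, n_samples: int = 20, threshold: float = 0.8) -> bool:
    """Sample a non-deterministic model repeatedly and measure output agreement.

    `generate` is any callable mapping a prompt to a normalised answer.
    The 0.8 agreement threshold is illustrative; a real standard would be
    set by the model risk function, not hard-coded here.
    """
    answers = [generate(prompt) for _ in range(n_samples)]
    _, count = Counter(answers).most_common(1)[0]
    return count / n_samples >= threshold

# Usage: flag prompts whose answers are too unstable to rely on, e.g.
# unstable = [p for p in review_prompts if not consistency_check(model_fn, p)]
```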
The second is explainability. Deep learning models are structurally harder to explain than statistical or traditional ML models. Some regulators are beginning to require explainability at a level that deep learning architectures cannot easily provide, which creates a genuine tension between model performance and regulatory compliance.
The third, and the one I find most underappreciated in practice, is the layered architecture that most regulated AI deployments actually use. You typically have a rule-based control layer sitting on top of the AI model. That means you are managing drift and explainability at each layer independently, and also managing the interaction between them. That adds meaningful complexity that most existing model risk frameworks have not yet fully addressed.
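A rough sketch of what that layering can look like, with a deterministic control layer screening the model's output before any action is taken; the rules, thresholds, and field names here are hypothetical placeholders, not the controls of any real deployment:

```python
def model_layer(transaction: dict) -> dict:
    """Stand-in for the AI model: returns a suspicion score and a rationale."""
    return {"score": 0.87, "rationale": "pattern resembles structuring"}  # hypothetical stub

def control_layer(transaction: dict, model_output: dict) -> str:
    """Deterministic, rule-based controls sitting on top of the model.

    Each layer has to be monitored separately, and the interaction between
    them (rules overriding the model, and vice versa) is itself a control point.
    """
    if transaction.get("amount", 0) > 1_000_000:
        return "escalate"        # hard rule fires regardless of the model score
    if model_output["score"] >= 0.8:
        return "escalate"        # model-driven escalation
    if model_output["score"] >= 0.5:
        return "human_review"    # grey zone routed to an analyst
    return "clear"

txn = {"amount": 250_000}
decision = control_layer(txn, model_layer(txn))  # -> "escalate" with the stub score above
```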
So the answer is: extend, not replace. But extend thoughtfully, and do not assume the ML governance work you have already done covers everything that AI requires.
From the inside, it looks like a graveyard of good ideas. Dozens of pilots celebrated at launch. Almost none are in production 18 months later.
The failure modes are almost always identical. The pilot was built in isolation from the business process it was meant to improve. The data in the pilot was cleaner than the data in production. Governance was deferred until after success, which meant it became the blocker at exactly the wrong moment.
Crossing that gap requires three things most organizations underinvest in: a production-grade data foundation, a governance framework ready before you need it, and an operating model with clear ownership for AI in production, not just AI in development. The organizations that have scaled AI are not the ones with the best models. They are the ones with the best discipline around those models. Technology is easy. The operating model is hard.
The organizations that have scaled AI are not the ones with the best models. They are the ones with the best discipline around those models.
The honest account starts with a deliberate choice to start small. We did not try to build something sophisticated on day one. We started with a narrow, well-understood use case where we had clean data and a clear definition of what good looked like. Then we grew the complexity of the model incrementally as confidence and evidence built up. That sequencing matters enormously, and most teams get it wrong by overreaching early.
The second principle we held to throughout was human in the loop. At every stage, a human reviewed and owned the output. We never removed that accountability, even as the model’s accuracy improved. That was both a governance requirement and a trust-building strategy. The team needed to see the model earn its place before they would genuinely rely on it.
The third decision, and in retrospect the most important for adoption, was to integrate the interface directly into the existing application that the compliance analysts were already using every day. We did not build a separate AI tool that people had to switch to. We brought the capability into their workflow. The adoption rate was high precisely because there was no adoption barrier.
The genuine innovation was what we called moving from the four-eye principle to the human-and-machine principle. In banking, the four-eye principle means every decision is reviewed by two humans. We replaced the second human eye with a machine. That freed up significant capacity, allowed the team to work faster, and actually increased quality because the machine was consistent in ways a fatigued human reviewer is not. The outcome was that we could catch more issues, reduce false positives, and mitigate the kind of regulatory fines and high-pressure situations that make compliance work so stressful in the first place.
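A minimal sketch of how such a human-and-machine review could be wired, assuming a hypothetical machine_review callable; the confidence threshold and routing are illustrative, not the actual control design:

```python
def human_and_machine_review(case, first_reviewer_decision: str, machine_review) -> str:
    """Second pair of eyes provided by a machine instead of a second human.

    The machine confirms or challenges the first reviewer; disagreement or
    low confidence falls back to a second human, so a person always owns
    the final outcome.
    """
    machine_decision, confidence = machine_review(case)
    if machine_decision == first_reviewer_decision and confidence >= 0.9:
        return first_reviewer_decision        # consistent second opinion, case closes
    return "escalate_to_second_human"         # disagreement: revert to four human eyes
```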
Let me start with a conviction that I think most organizations are not yet taking seriously enough: we are moving toward a world where there will be more agents than employees. That is not a distant scenario. It is a near-term operational reality, and leaders who are still thinking about AI as a tool that sits alongside their team are already working from an outdated frame.
The second conviction is about what happens when agents are not just deployed individually but orchestrated well as a coordinated system. The productivity multiplier is not incremental. We are talking about a factor of two or more. That kind of step change does not come from automating one task. It comes from redesigning how work flows across humans and agents together.
But the conviction that matters most for leaders managing people through this transition is this: agents are not people replacers. They are task replacers. That distinction sounds simple, but it changes everything. When a leader frames AI as replacing tasks, the team can engage with the change constructively. When it gets framed, even implicitly, as replacing people, you lose trust, and you lose the adoption you need for any of this to work.
The honest answer about what actually changes for the individual is their relationship with the work itself. The tasks that used to consume 60% of the day disappear or compress. What remains is the work that requires genuine judgment, contextual understanding, and human accountability. For most people, that is more interesting, but it also requires new skills. The transition works when leaders are honest about that, invest in upskilling proactively, and involve the team in defining what the new work actually looks like.
Agents are not people replacers. They are task replacers. That distinction sounds simple, but it changes everything.
The industry has not fully figured this out, and we should be honest about that.
Imagine an autonomous agent flagging a transaction as suspicious and triggering a client freeze. The flag turns out to be wrong. The client is a high-value relationship. Who owns that outcome? The data scientist who built the model? The compliance officer who approved the deployment? The vendor who supplied the underlying system? In most banks today, that question has no clean answer, and that is a serious problem.
The principle I work from is that accountability cannot be automated. You can delegate a task to an agent. You cannot delegate accountability for the outcome. A human, at a specific point in the decision chain, must own the result. Every agentic deployment needs an accountability map: who approved it, who monitors it, who has authority to intervene, and what triggers that intervention.
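What such an accountability map might look like attached to a single deployment, as a minimal sketch with illustrative roles and triggers rather than any institution's actual standard:

```python
# Illustrative accountability map for one agentic deployment.
# Roles, triggers, and the agent name are hypothetical placeholders.
accountability_map = {
    "agent": "transaction-screening-agent",
    "approved_by": "Head of Compliance Technology",        # owns the decision to deploy
    "monitored_by": "Model Risk Management",               # owns ongoing performance review
    "intervention_authority": "Duty Compliance Officer",   # can pause or override the agent
    "intervention_triggers": [
        "false-positive rate above the agreed threshold",
        "client-impacting action taken without human confirmation",
        "material model or data drift detected",
    ],
    "outcome_owner": "Head of Financial Crime Prevention", # accountable for results, not the agent
}
```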
The banks building that architecture now, before the regulator requires it, will be the ones shaping what the standard becomes. That is the position you want to be in.
You can delegate a task to an agent. You cannot delegate accountability for the outcome.
It was one of the most formative and humbling experiences of my career.
The talent was exceptional, genuinely world-class. The challenge was never capability. It was context. Building AI for compliance in a Swiss bank from a team in India required serious investment in shared understanding: the regulatory environment, the risk culture, the unwritten rules of how a global bank actually operates. Teams that treated context transfer as a one-time onboarding event struggled. Teams that built it as a continuous practice thrived.
What surprised me was how quickly the India team moved from execution to genuine intellectual partnership once that context was established. They were not just building what we specified. They were challenging the specification. That shift happened faster than I expected and produced better outcomes than I had planned for. The 30% capacity shift was real. But the more valuable outcomes were depth and speed, not just volume.
My quotable line from that experience: build the context before you build the model.
Build the context before you build the model.
Stop exporting the talent and start compounding it domestically.
India graduates more AI and data engineers annually than almost any country on earth. The risk is that the default model remains building for others, executing briefs defined elsewhere, optimizing for delivery rather than invention. The opportunity is to redirect that extraordinary capability toward solving India’s own hard problems at scale, in healthcare, agriculture, financial inclusion, and public infrastructure, where the data complexity and the stakes are as high as anywhere in the world.
The organizations that will build lasting advantage are the ones investing in problem definition alongside technical execution. AI engineering is abundant. AI product thinking is scarce. That is where the next generation of Indian AI leaders will be built, and the organizations that develop that muscle now will be impossible to catch later.
India does not have an AI talent problem. It has an AI ambition problem. The talent is ready. The question is whether the organizations are.
India does not have an AI talent problem. It has an AI ambition problem. The talent is ready. The question is whether the organizations are.
The voluntary window is an opportunity, not a holiday.
Every jurisdiction that started principles-based has moved toward binding regulation as capability and risk scaled together. Europe did it with the AI Act. The U.K. is moving that way. India will follow the same arc, and in sectors like banking, financial services and insurance (BFSI) and health tech, where data sensitivity is highest, that moment may come sooner than most leaders expect.
What Indian enterprises should be doing now is building governance architecture while they have the freedom to design it thoughtfully, rather than retrofit it under pressure. Model inventories, explainability standards, accountability maps, and AI risk taxonomies. None of that limits innovation. All of it positions you as a trusted operator when the regulator arrives. The enterprises that shape the Indian AI governance standard will be the ones that showed up early with something credible already built.
The most honest advice I can give is this: do not pretend you have all the answers, because your people will know you do not.
AI is moving faster than any leader’s ability to fully map the terrain. The leaders who build genuine trust in this moment are not the ones who project certainty. They are the ones who are honest about what is changing and clear about what will not: the value of human judgment, relationships, and accountability. And they involve their teams in figuring out the path rather than announcing it from above.
The practical version of that is simple. Bring your team into the AI journey as participants, not recipients. Let them identify the use cases. Let them flag the concerns. Let them co-design what the new work looks like. When people have a hand in building the future, they stop fearing it.
And protect the fundamentals. Skills, growth, dignity, and clarity about what is expected. If you get those right, the technology almost takes care of itself.
When people have a hand in building the future, they stop fearing it.