
How Do Indian CXOs Actually Build a Culture Around AI?

AI has been inside Indian businesses for years. What changed is the pressure to scale it, show returns, and bring entire organizations along. Nine senior industry leaders share what the real work looks like from the inside.

Most brands today are stuck somewhere between a promising pilot and an organization that actually works differently because of it, and that gap is where most AI investments silently die.

The question of how to close it is rarely answered with honesty. It tends to get lost in platform announcements, adoption statistics, and executive enthusiasm that does not survive the first deployment. NervNow spoke to nine senior leaders across FMCG, retail, fintech, automotive, and consumer services to understand what building an AI culture actually demands, the decisions that made the work real, the assumptions that failed in practice, and where Indian organizations are actually getting it right.

The Pilot Trap: Most AI Projects Look Great in Demos and Die in Deployment

Across industries and company sizes, the pattern holds: most AI pilots collapse because the organization never agreed on what success looked like before the work began.

Rahul Bibhuti, General Manager (CEO) at Reckitt Nutrition, said, “Never start an AI project before clearly identifying and signing off on the specific business metric you are trying to move. The pilot has to begin with a metric, not a model.”

Indranil Mukherjee, Deputy Vice President and Head of Customer Experience at PVRINOX, framed it from operational reality: pilots fail because the business is not ready. “At PVR INOX, a pilot only holds if the problem is clearly defined, data across POS, ticketing, and CRM is connected and reliable, and success is measurable from day one.” His test is not theoretical but applied on a packed Friday evening show.

Varun Rustagi, Director of Product at Cars24, distilled it to three non-negotiables: a clear use case, a measurable success metric, and a business owner. Without all three, the pilot is a guess. If the workflow or data is already broken, AI will not fix it; it will scale the mess. The best pilots, he added, are small, sharply scoped, and designed backwards from deployment.

Rahul Chaudhari, Global E-commerce & Product Leader and formerly with Amazon and Kohl’s, added a harder edge: fix the data containers before you touch a model, connect the pilot to a business model shift, and write kill rules before you start. That last point is the one most teams skip. Knowing when to stop is as important as knowing how to begin.

Anand Bhatia, Chief Data and Analytics Officer at HDB, reinforced this from a different angle: a pilot needs a simple, specific, well-understood expectation, agreed and accepted upfront, so it does not become a shiny toy or a hobby project.

Kalyani Viji Sridhar Seshadri, Head of National Customer Experience and Engagement at Titan/Tanishq, brought a longer view: a pilot worth running is one designed to hold, not just to run. That means committing to the customer journey for the long term, being precise about the problem you are actually solving, and having clear sight on scale, both now and in the future. She also shared what happens when this discipline breaks down. A delivery platform she used as a customer compensated her for a delay through a well-designed AI-driven solution. But when the escalation involving a human required a judgment call, the order never arrived. The AI worked. The end-to-end journey did not.

Sonali Singh, Head of Marketing at Switch Mobility (Hinduja Group), positioned the problem at a strategic level: AI adoption is a strategic imperative, not a tech upgrade. Deployment success needs to be measured accordingly, as an indicator of innovation and sustainable growth, not just cost savings. An AI agent that brings more impact in less time with less human intervention is showing real value. Many sales lead platforms today demonstrate exactly this, but only when the success parameters are defined to capture it.

The collective diagnosis is that AI pilots become expensive experiments when they are scoped around technology curiosity rather than business specificity. The demo works because the demo is controlled, and deployment is not.

The Talent Reality: You Cannot Hire Your Way Out of the AI Skills Gap

When organizations hit an AI skills gap, the instinct is to hire out of it. Every expert NervNow spoke to pushed back on this.

Bibhuti reframed the problem at its root: AI is not an IT project, so stop asking your IT leader to run it. The brand manager and the sales manager need to learn it, a little every day, with the same discipline described in Atomic Habits. New muscles require the gym, and that means everyone, daily.

Chaudhari separated two things organizations conflate: AI literacy and AI engineering. You do not need every product manager writing code. You need them to write a good brief, evaluate outputs, and know what AI cannot do. That is a six-week internal program, not a six-month recruiting cycle. He also argued for restructuring teams around domains rather than technologies: people learn faster by doing within their own area than by having a generic ML hire parachuted in. His third move: use AI itself to close the gap. The best teams embed LLMs as analysts that summarize patterns, propose hypotheses, and simulate scenarios, while humans stay in the seat for judgment and trade-offs.

Rustagi described what Cars24 has built: a small central team that understands AI deeply and holds the organization’s relationships with agentic AI and LLM companies; business, product, and ops leaders who know where to apply it; and frontline teams trained to use AI in daily work. The real advantage emerges not from the central team but from the moment the SEO executive and the strategy head are both using AI tools inside their own workflows, which is what Cars24 is now seeing.

Anirudh Nandy, Associate Director at Razorpay, identified three moves that enterprise organizations are making: building AI-augmented roles rather than pure AI roles, investing in AI literacy at the middle management layer rather than just the engineering layer, and treating vendor relationships as talent pipelines.

Seshadri argued that the stronger move is intentional learning and development, aligned to what the business is actually trying to solve. Once that clarity exists, the AI-business-talent mix becomes easier to shape. The real work, she said, is helping people see where AI fits into how they already work.

Amit Verma, Founder of DigitUP, made the point that sits underneath all the others. The biggest driver of the AI skills gap in India is not the absence of AI courses; it is the absence of a data analytics culture. Before AI can compound, organizations need people who understand how to collect data systematically, build workflows that generate it, and make decisions from it. The AI layer comes after that foundation exists. His first AI implementation was in 2015, a neural network built in VBA Excel to control a reactor, done because the use case was clear, not because someone asked him to.

The Resistance Question: Are Employees Resisting AI, or Resisting Ambiguity?

When employees push back on AI initiatives, the instinct is to label it resistance. Chaudhari called that framing wrong from the start. In most organizations, employees are not afraid of AI; they are afraid of organizational ambiguity dressed up as a technology initiative. His prescription: lead with what changes in your job, not with what AI can do. The person in the room is thinking about one thing: am I being replaced? Make it concrete enough that they can see themselves in the future, not just the technology. He added a structural requirement: build truth-seeking rituals. If truth lives in the hallway instead of the meeting room, no AI initiative survives contact with reality.

Nandy argued for a language shift: from “AI adoption” to “AI partnership.” The culture change accelerates the moment an employee has a personal win, when a task that used to take two hours takes twenty minutes. That single experience moves faster than any change management program. He also noted that organizations need to pair this with rigorous AI governance to hold efficiency gains over time.

Bibhuti applied the Diffusion of Innovation framework directly: in every organization there are early adopters who need to be encouraged and made role models. The work is building concentric circles outward from them, relentlessly, with clear direction. And in AI, he emphasized, there is no you versus me, no vendor versus client. It is collaboration and co-creation.

Seshadri observed that most successful change journeys start with one small, usable AI solution, something people can adopt without rethinking everything they do. The first success use case matters more than the scale of the ambition. She also raised a sector-specific point: in jewellery, where purchase behavior is deeply personal and trust-led, a higher human-to-AI ratio often makes sense. The balance is not universal.

Rustagi described how the reframe at Cars24 was structural, not rhetorical. Dedicated Slack channels where teams share AI use cases they have solved created healthy internal pressure. CXOs addressed all employees directly about the AI transition, explaining the why. Creating awareness and receptiveness, he argued, matters more than fear of layoffs.

Verma and Bhatia both pushed back on the “AI-first culture” framing entirely. The culture should be customer-first, always. Human-centric, not AI-centric. Employees, Verma noted, have not been exposed to sufficient data analytics in the first place, and that is what is driving adoption friction, not resistance to AI itself.

The ROI Question: When Does AI Stop Being an Investment and Start Being Accountable?

Seshadri identified the moment the conversation shifts: AI begins to perform like an investment when it is plugged into the right problem, supported by the right data, and tied to a clear business deliverable, and when the organization does not stop at deployment. The consistent, disciplined effort to learn, adapt, and tweak after go-live is what converts experimentation into long-term ownership.

Nandy reframed the measurement itself. Organizations need to stop measuring AI ROI as a cost centre and start measuring it as a capacity multiplier. The question is not what AI saved; it is what the team did with the time AI gave back. If the answer is that nothing changed, the problem is process design, not the AI.

Bhatia pointed to a failure mode that sits upstream of any ROI conversation: setting unrealistic expectations at the start. An AI system that promises to generate a finished video in thirty seconds and takes considerably longer, with significant rework involved, erodes trust before the business case can even be made. Expectations, he argued, need to be simple, specific, and genuinely agreed upon, not inherited from a vendor demo.

Rustagi described how Cars24 has operationalized accountability: all AI initiatives across teams and lines of business are tabulated, with business impact expressed in EBITDA terms. Cost savings in the QC pod are linked to man-hour reductions. Automation in challan fulfilment links to turnaround time (TAT) reduction, which links to Net Promoter Score (NPS) improvement. The chain from AI activity to business number is made explicit, and AI pods are reviewed fortnightly. The mistake he sees most often is keeping AI protected from that kind of scrutiny for too long.

Verma kept the scope tight. DigitUP is currently integrating an AI system into its B2B product that will automatically identify which website pages need which schema markup, a task that previously required human classification at scale. The use case is contained, the workflow is understood, and the improvement is measurable. Moonshot projects, he noted, consistently fail. Tasks embedded in existing daily workflows, with sufficient data to operate from, do not.

Bibhuti reached for a longer frame: AI is like the web revolution, the information superhighway at the turn of the century. Deploying an AI program is the starting point, not the finish line, and the organization that treats it as a light switch will keep being disappointed. Continuous learning and continuous improvement are the ongoing work, and performance expectations need to be built into the journey from day one rather than demanded at the end of it.

Chaudhari proposed a phase structure for anyone still unclear on when to expect what: early on, you are buying learning, and the question is whether there is a signal. Then you test whether that signal repeats across more customers or channels. Only then does AI become a real P&L line item with targets and accountability. Demanding revenue impact from a six-week experiment is the wrong question. Letting something run eighteen months without defining what “perform” means is an equal mistake. The discipline is in the gates: knowing what signal looks like before you start, what repeatable means before you scale, and having the courage to kill it if it is not working.

The One Decision: What Made the AI Journey Real

Mukherjee’s view from PVRINOX is grounded in what the operational reality of scaling actually demands: the organizations getting this right are building business and data translators, people who understand both the domain and the data, rather than importing generic AI expertise that then has to learn the business from scratch. A cinema operations professional who understands data, he argued, creates more impact than an AI hire trying to learn cinema operations.

Seshadri pointed to something that sounds simple but rarely is: staying customer journey-obsessed while maintaining a sharp eye on current business priorities. Relevance, she said, is what turns ideas into usable solutions, and it is the absence of that discipline that keeps AI in perpetual pilot mode.

Nandy identified the inflection point as the decision to stop treating AI as a feature and start treating it as an architectural layer. Adding a recommendation widget or a chatbot is a feature. The real question is: what if the entire workflow were designed around how AI reasons, rather than how humans currently navigate a UI? The moment a team commits to that position and takes accountability for it, the AI journey becomes real.

Bhatia kept it to two things: clear KPIs and genuine business understanding. Without both, the AI journey stays theoretical regardless of the technology in use.

Rustagi said what made AI real at Cars24 was embedding it into core workflows rather than running it as a side experiment. The hardest lesson: a good demo does not mean a good business outcome. Adoption, process design, and trust in the system matter as much as the model. He also noted where the most instructive AI usage examples have come from: junior employees, where most of the routine work lives. Starting does not require expertise. Using AI, he said, is like using Excel: you cannot know all the formulas at once, so you start, keep exploring, and build from there.

Verma’s hardest lesson came from 2021. He had invested heavily in a smart chatbot, recognized before ChatGPT arrived that the space would be automated by a larger player, stopped development, and moved on. The lesson he drew from it: pursue enterprise-scale requirements with deep embedded workflows, build expertise across the entire process, and be patient. The tortoise wins.

Bibhuti identified the quality that holds everything together over the long run: resilience combined with learning agility. In retail, AI is already predicting sales and availability gaps based on recent trends and generating measurable uplift. In content, it covers roughly 80 percent of the work on briefs, positioning, and storyboards. The remaining 20 percent is where the experienced judgment still earns its place.

Chaudhari named the reframe that changed how he approached the whole question: stop asking which AI tool to buy and start asking how the business should work differently in an AI world. That shift moved the conversation from vendor evaluations to business model redesign. The hard lesson along the way was that months were spent testing tools and models when the real bottleneck was that product, customer, and event data did not speak the same language across systems. Fixing the data containers, unglamorous and foundational work, unlocked more value than any model upgrade. His summary: AI transformation is mostly a data problem and an org design problem wearing a technology costume.

What This Actually Adds Up To

Across five questions and nine perspectives, one pattern holds: the bottleneck in AI adoption in India is almost never the technology. It is the absence of a well-defined problem, broken data that no model can compensate for, organizational ambiguity that gets dressed up as employee resistance, and ROI expectations applied at the wrong stage of the journey.

The organizations that are actually moving started with the business metric, fixed the data before touching a model, and involved business owners from day one, treating AI as a learning culture rather than a one-time deployment. On the build-versus-buy question, Chaudhari’s view holds: build where your data and domain create a genuine moat. Buy everything else, but with clean APIs and standard data contracts, so a vendor can be swapped without a twelve-month migration project. Building something generic that three vendors already do well is the worst move. Buying something core and losing control of your differentiation is the second worst.

The leaders in this conversation have learned most of this the hard way. The common thread across all of them is that the AI journey became real not when they deployed a tool, but when they stopped treating AI as the answer and started treating it as a layer that sits on top of clear thinking, clean data, and organizational discipline. That part has no shortcut.

Deepika Yadav

