At ASCI’s ICAS Global Summit 2025, a panel featuring Kunal Guha, director, privacy - Chrome and Android, Google; Mary K Engle, executive VP, policy, BBB National Programs, USA; Sameer Chugh, chief legal officer, Games24x7; and Chandradeep Mitra, founder and CEO, Pipalmajik, discussed responsible AI adoption.
Tanu Banerjee, partner, Khaitan & Co, moderated this discussion.
The session explored how organisations can balance AI’s opportunities with its risks while keeping consumer interests a priority. The panellists also examined AI’s impact on marketing and advertising, debating the need for transparency, consumer trust, and regulatory guardrails.
Opening the session, Engle addressed the impact of AI on human creativity, emphasising its role as a tool rather than a replacement. "We all hope AI will enhance, not replace, human creativity," she said. "Like pens and paint brushes, it’s another tool to expand creative potential."
She acknowledged concerns about over-reliance.
Engle remarked, "There’s a risk of AI becoming a shortcut, much like GPS reducing map-reading skills. The key is balance, embracing AI’s potential while preserving human ingenuity. And remain hopeful it will ultimately enhance creativity."
Guha described AI as both overhyped and underappreciated.
He said, "We track AI’s progress like a race among math prodigies, but its real power lies in how we use it. Imagine having the smartest mind in a pocket - AI can enhance human potential through real-time assistance, making it a force unlike anything before."
Chugh took a pragmatic view, stressing that AI is already transforming industries.
"We are in denial, thinking there’s still time, but AI is advancing rapidly. It’s not just another tool; it’s an evolving reality. Like any technology, it has its pros and cons, but when used well, the benefits are immense," he shared.
Mitra, drawing on both academic and industry insights, saw AI as both a game-changer and a challenge.
He said, "AI won’t slow down; it’s moving too fast. By the time we understand its current uses, new ones emerge. Some will revolutionise speed, scale, and hyper-personalisation, while others will push ethical and privacy boundaries."
Mitra warned that AI could redefine advertising. "Customer research, content creation, and selling will merge into one seamless process. AI will predict consumer needs, generate personalised ads, and even interact as service agents. It’s happening now, and the real challenge is deciding where to draw the line,” he remarked.
Balancing innovation with responsibility
AI is advancing at an unprecedented pace, transforming industries in ways we’re only beginning to understand. While its potential is immense, its ethical use remains a key concern.
Guha highlighted the need to strike a balance between innovation and responsibility. “This isn’t a zero-sum game; we don’t have to sacrifice innovation to be responsible. Progress happens when both move forward together,” he explained.
AI is transforming businesses in unprecedented ways, but the real challenge is ensuring its responsible use. Guha expressed, “Can we make AI unbiased, transparent, and trustworthy? The goal is to create amazing experiences without users worrying about the technology behind them.”
AI plays a significant role in gaming, from predictive modelling to fraud detection, but its use in advertising is where responsibility becomes critical. Chugh explained, “When we use AI for targeting, it must be personalised yet responsible. If ads exaggerate claims or mislead users, we push them toward unhealthy behaviours. AI should enhance experiences, not manipulate them.”
Chugh noted that while bad actors exist, ethical AI can help. “Gaming and AI are evolving fast, and we’re still learning. As tech advances, personalisation will improve, but responsible use is crucial.”
The AI crossroads
Artificial intelligence is redefining industries at an unprecedented pace, and marketing is no exception. As AI’s capabilities expand, so do the ethical concerns surrounding its use, particularly in advertising, where the potential for consumer harm is significant. This evolving landscape demands vigilance, responsibility, and proactive regulation.
Mitra highlighted a fundamental issue: the misuse of AI in marketing can create short-term gains for shareholders at the long-term expense of consumer trust. "There are industries where the vulnerability or the bad application of AI can do temporary good for one's shareholders while causing long-term detriment to consumers," he explained. "And that is what we all in this hall need to worry about."
The advertising industry is already grappling with a credibility crisis. A 2022–23 global report by Gallup placed marketing and advertising among the least trusted professions, ranking third lowest overall.
Mitra saw this as a pressing concern: "We are out of it, and we should seriously worry about what a few bad actors can do. AI has the power to create extreme situations that could lead to a great loss of consumer trust, and this is where we need to be particularly careful about governance."
The role of self-regulation
Engle saw AI as an opportunity for self-regulatory bodies to take the lead. "Self-regulation can act more quickly than government regulators," she noted. "Bringing together industry expertise to craft workable rules that balance AI’s benefits with its risks is essential."
She pointed to ongoing efforts by the ICAS global think tank, which is researching consumer expectations and the potential uses of AI in advertising.
"Existing laws apply to AI, but the concern is how AI might supercharge fraud and invasion of privacy. Hyper-personalisation often touted as AI’s greatest advantage can also be its most dangerous weapon," she warned.
The risks are real. "An alcoholic seeking help shouldn't see alcohol ads, nor should an anorexic teen get harmful content. We've seen this on social media. AI needs self-regulation now we can act faster and smarter than laws alone,” Engle remarked.
Building a responsible AI future
As artificial intelligence continues to reshape industries, there is a need to balance innovation with ethical considerations.
Mitra, talking about the ethical dimensions, questioned whether AI-generated content should be labelled. He shared, "If AI enhances value for consumers, disclosure may not be needed. But if it shapes their perception of a brand, transparency becomes our responsibility."
Engle highlighted the complexities of AI’s role in advertising and warned that over-regulation could lead to unintended consequences. “It's important to distinguish the different uses of AI here. If AI is used to create an ad, it’s likely not significant to consumers. In regulatory terms, it’s not ‘material’, meaning it doesn’t influence their decision to purchase or use the product. A good comparison is CGI in ads; we don’t disclose its use because it’s not deemed necessary. The same logic could apply to AI-generated ads,” she explained.
Engle cautioned against hasty AI regulations. “Labelling every AI-generated ad risks disclosure fatigue and the ‘implied truth effect,’ where unlabelled content seems more trustworthy—undermining transparency itself,” she revealed.
On the back end, where AI is used for analytics, Engle noted, things become more complex. “Here, privacy concerns arise - what do consumers know about how their data is being collected and used? Many laws require consumer consent for targeted advertising, especially for minors. But as hyper-personalisation evolves, we need to strike a balance between its benefits and potential risks,” she said.
Engle added that a one-size-fits-all solution would be premature; the issue needs careful thought. She explained, “If consumers see warnings every time AI is used, similar to the overuse of cookie banners, they'll suffer from notification fatigue, mindlessly accepting or rejecting without real consideration. That defeats the purpose of transparency. We need more discussion and study to determine the right approach for AI’s role in backend analytics."
Given this ever-changing landscape, Chugh emphasised the role of self-regulation in ensuring responsible AI use. He shared, "Self-regulation is about deciding what is right and wrong. Are we adding value to the consumer, or are we just focused on short-term profit? AI is here to stay, and until laws catch up, it is up to us to take ethical judgment calls on what we disclose to consumers."
Guha echoed these sentiments, advocating for a risk-based approach to AI governance. "There needs to be common sense and customer responsibility in AI regulation. Certain high-risk applications, like AI in medical decision-making, absolutely must be removed due to the immense risks involved," he stated.
Looking at the broader economic potential, Guha positioned AI as a transformative force for India’s digital economy. "India needs to be an AI superpower. Research suggests AI will drive our digital economy to a trillion dollars, contributing 20% to our GDP. Our current GDP is 3.5 trillion, and AI can help us double that," he asserted.
Guha stressed the importance of collaboration between government bodies, industry leaders, and businesses to harness AI’s potential responsibly. "If we do it right, we are on the precipice of something great," he remarked.
Chugh noted that while AI could simplify complexities, ethical considerations must remain a priority. "Humans create complications, but AI has the potential to simplify many of them. The challenge is ensuring we stay on track, using AI ethically and preventing bad actors from causing harm. If they overpower responsible users, we are waiting for disaster,” he stated.
Engle weighed in from a regulatory perspective, acknowledging AI’s efficiency while also raising concerns about the challenges it presents. She expressed, "AI can be a great asset in ad creation and self-regulation, but as a regulator, I worry about how it can make our jobs more difficult. If AI can generate millions of personalised ad versions, how do we track them all? Advertising has evolved from mass media to social media, and now with hyper-personalisation, ensuring legal, honest, and truthful advertising becomes exponentially harder."
Mitra stressed the importance of balancing technological advancement with regulatory oversight. "Just as UPI leapfrogged financial backwardness, AI can do the same, not just in speed and capability but in responsibility. If we get this right, we can lead the world. With strong technological expertise, a self-regulatory framework, and unmatched diversity, India is well-positioned. We can merge hyper-personalisation with inclusivity, avoiding a one-size-fits-all marketing approach. However, regulation must keep pace without stifling progress. Laws will always lag behind AI. If we uphold core principles and self-regulate effectively, India can set the global benchmark in AI-driven transformation,” he signed off.