The AI Perplexity: To ad or not to ad

As brand integrations in the AI space and ad-fuelled chatbots become a reality, we decode whether advertising and AI can truly coexist.

Anupama Sajeet

Mar 24, 2026, 10:27 am

Rubeena Singh (left) and Hridaye A Nagpal

A quiet battle is unfolding in the AI arena, with companies taking sharply divergent monetisation paths in a bid to outdo one another. Perplexity, among the first to experiment with brand integration in the AI space, is now stepping back to ‘safeguard user trust,’ while OpenAI’s ChatGPT explores ad-led revenue to offset the costs of running LLMs. As fault lines deepen, another major player, Anthropic, has taken a pointed swipe at ad-fuelled chatbots, even as tech giants like Google and Meta double down on ad-driven models.

The divide raises a critical question: Will AI become the next dominant ad playground, or will commercialisation undermine the very trust and neutrality that underpin its adoption? We weigh in with Rubeena Singh, managing director, NP Digital India, and Hridaye A Nagpal, filmmaker and content strategy lead, Invideo, on whether advertising and AI can truly coexist.

Do you see conversational AI platforms evolving into the next major advertising channel, comparable to search or social media platforms, even disrupting search advertising’s dominance?

Hridaye Nagpal (HN): Advertising always follows attention, and attention is shifting toward conversational AI. But calling it ‘the next search’ misses what’s different. Search had a clear separation between paid and organic. Conversational AI gives you one answer that sounds like a trusted friend. That’s a completely different surface for ads. Will it become a major channel? Almost certainly. OpenAI is already charging advertisers three times what Meta does. The money is moving. But whether it disrupts search depends on one thing: can these platforms show ads without destroying the user’s belief that the answer they’re getting is honest? Perplexity tried it and walked away entirely. That’s a warning the whole industry should take seriously.

Rubeena Singh (RS): AI platforms will not be a straight ‘search-killer’. Instead, they will grow into a major decision-stage channel that sits on top of search and social rather than replacing them. Conversational AI shifts discovery from the traditional ‘10 blue links’ to single, synthesised answers. At the moment, ChatGPT ad products are impression-based and invite-only, with premium CPMs of around USD 60, roughly two to three times typical Meta CPM ranges, which makes them better suited to influence and learning than to hard performance. Meanwhile, current LLM traffic generally shows lower direct conversion density, as it is usually more upper-funnel than classic keyword-driven search. This limits how much performance budget advertisers will comfortably shift here in the near term, though the long-term jury is still out.

What unique advantages and opportunities does AI advertising offer advertisers and publishers versus traditional digital media ecosystems?

HN: AI advertising offers something no channel has offered before: a brand recommendation inside what feels like a personal conversation, at the exact moment someone is thinking or deciding. That’s powerful.

RS: The shift toward conversational interfaces offers several distinct advantages that move beyond the limitations of traditional digital ecosystems. In a chat, the ad can be a continuation of the solution. The brand is seen as a product or service that directly solves the expressed problem, rather than a visual interruption. Targeting can leverage real-time conversational context, including the specific problem, constraints, and preferences. This is far more powerful than proxy signals like broad interests or keywords, especially for complex and high-consideration categories. Having said that, current LLM usage is still skewed toward exploration and ideation. Even with better intent signals, you are often influencing upstream consideration rather than harvesting ready-to-buy demand.

What does Perplexity AI’s move signal about consumer expectations from AI platforms? Is an ad-supported AI model inherently at odds with user trust, or can transparency solve the issue?

HN: Perplexity’s retreat says everything. They tried ads, ran the experiments, and walked away because it broke something fundamental. Users need to believe they’re getting the best answer, not the best-paying one. Can transparency fix it? Not in this medium. Transparency works when you can visually separate content from commerce. Google has a sponsored tag you scroll past. Instagram has ad labels. But conversational AI gives you one answer in a personal, authoritative tone. A sponsored label doesn’t undo that. You’ll always wonder who that answer was really serving.

RS: Transparency is necessary. The platform must structurally protect content integrity. Users expect AI assistants to behave like objective, citation-backed advisers. Any monetisation must work around that expectation, not against it. An ad-supported model is not inherently incompatible with trust, but it is extremely sensitive to design. Sponsored content must be clearly separated, labelled, and non-intrusive. If a sponsored suggestion helps the user execute the answer, such as booking, buying, or implementing, it feels like a feature. If it bends the answer, it immediately feels like manipulation.

If ad integration compromises perceived neutrality of AI responses, how damaging could that be for brands?

HN: Here’s the paradox brands should worry about. The intimacy that makes AI advertising appealing is exactly what makes it risky. When a brand shows up in a search result, there’s distance. It’s an ad, you know it, you evaluate it. But when a brand shows up inside what feels like trusted personal advice, and that advice turns out to be commercially motivated, the backlash doesn’t just hit the platform. It hits the brand. If I’m asking an AI to help me choose a camera for a shoot and it steers me toward a brand because they paid for that placement, I don’t just lose trust in the AI. I lose trust in that brand. They went from being a genuine option to the company that tried to influence my decision through a channel I thought was neutral.

RS: Because AI responses feel more authoritative and one-to-one than a standard search results page, any perceived bias can damage brand trust faster than a bad search ad placement. Key risks include:

•    Perceived “pay-to-play” answers: If a brand appears in queries without organic authority to back it up, users may read it as manipulation rather than the best solution.
•    Association with degraded neutrality: If platforms widely serve overly commercial answers without transparency, brands over-indexed on paid AI placements could suffer ‘guilt by association’ and a long-term loss of credibility.

This is why it is smart to anchor strategy in organic authority and answer engine optimisation (AEO), alongside paid ads.

How can platforms maintain editorial integrity between organic responses and paid placements?

HN: The conversational interface is intimate by design. People ask AI things they wouldn’t Google. When I’m using AI to work through a creative problem, an ad in that moment isn’t just intrusive. It’s a breach of trust. But there’s a middle ground nobody’s exploring. What if the AI gives its honest answer first, then asks, “Would you like to see a sponsored suggestion?” That one pause changes everything. The answer stays clean. The user stays in control. Brands still get a way in, but only with permission.

RS: Conversational intimacy cuts both ways. The more the system knows about a user’s problem, the more an irrelevant or overly targeted ad feels like a privacy violation. To avoid this, platforms need guardrails where the organic answer must come first, followed by distinctly boxed and labelled sponsored options placed after the reasoning, not inside it. Organic results must be optimised for accuracy and completeness. Paid units should be framed explicitly as ‘ways to act on this answer,’ such as specific tools, services, or products to implement what was recommended. Platforms should exclude highly sensitive intents and enforce relevance thresholds so ads do not appear where they can harm credibility.

This article first appeared in the March issue of Manifest.