
How a Leading Fintech Cut Through One of the Most Crowded Categories to Lift AI Visibility 33%

Delia Rowland

March 6, 2026

5 minute read

Case study

The Challenge

Treasury and cash management is one of the most saturated categories in B2B finance. 

The global treasury management software market is valued at roughly $6.9 billion and projected to nearly double over the next decade, with a competitive field that spans legacy enterprise players and a growing wave of fintech challengers, all fighting for the same buyer attention.

A leading fintech platform was already showing up in AI-generated answers for its core product use cases. But for treasury-specific, high-intent prompts, it was underrepresented relative to the category's importance to its business.

The visibility gap wasn't a brand problem, but a distribution problem — and in a market this dense, closing it required a strategy for the sources LLMs actually pull from.

The question: Could targeted placements in the right sources shift that?

The Hypothesis

LLMs don't invent recommendations. They pull from sources they've indexed: third-party listicles, roundups, and comparison articles that function as a dynamic knowledge graph.

If the right mentions could be placed in the sources LLMs actually cite for treasury prompts, visibility for that cluster should follow.

The Approach

The company partnered with Noble over a one-month pilot. Noble placed the client in 9 targeted articles, each selected based on relevance to the treasury prompt cluster and a track record of appearing in LLM citations.

Each article was tagged to a primary LLM target: ChatGPT, Google AI Overviews, or Perplexity.

Visibility was tracked daily using Profound, which measures share-of-voice across a defined prompt cluster (i.e., the percentage of AI answers that surface a given brand on any given day).
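Profound's sampling and scoring pipeline is proprietary, but the metric as defined here is straightforward to illustrate. Below is a minimal sketch, assuming you have a set of AI answers sampled for the prompt cluster on a given day; the answers and brand name are hypothetical.

```python
# Minimal sketch of a daily share-of-voice score as defined above: the fraction
# of sampled AI answers that surface the brand on a given day. Illustration only;
# Profound's actual sampling and scoring are proprietary, and the answers and
# brand name below are hypothetical.

def daily_share_of_voice(answers: list[str], brand: str) -> float:
    """Fraction of AI-generated answers that mention the brand."""
    if not answers:
        return 0.0
    hits = sum(1 for answer in answers if brand.lower() in answer.lower())
    return hits / len(answers)

# Hypothetical day: 3 of 40 sampled answers mention the brand -> 0.075
sample_answers = ["...includes BrandX for treasury automation..."] * 3 + ["no mention here"] * 37
print(round(daily_share_of_voice(sample_answers, "BrandX"), 3))  # 0.075
```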

The Results

Overall visibility

Across all tracked treasury and cash management prompts:

  • Before (Oct 23–31): visibility score ~0.054
  • After (Jan 10–31): visibility score ~0.072
  • Change: +33% relative increase
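The +33% figure is simply the relative change between those two rounded scores, which you can verify directly:

```python
# Relative lift between the reported before/after visibility scores (rounded values).
before, after = 0.054, 0.072
relative_lift = (after - before) / before
print(f"{relative_lift:.0%}")  # 33%
```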

That lift accumulated steadily as mentions went live, not as a single jump but as a compounding effect across the pilot.

Where it actually came from

Of the 9 mentions placed, 7 had enough post-indexing data to analyze. Across those 7, average visibility in the week after indexing was 19% higher than the week before, with each mention contributing incrementally to the total lift.
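For readers who want to reproduce the per-mention view, here is a minimal sketch of that local-uplift calculation, assuming a daily visibility series keyed by date; the exact window length and overlap handling used in the pilot are assumptions, not published details.

```python
# Sketch of the per-mention "local uplift": mean visibility in the 7 days after a
# mention's assumed indexing date vs. the 7 days before. Window length and overlap
# handling are assumptions; the pilot's exact rules may differ.
from datetime import date, timedelta

def local_uplift(daily_visibility: dict[date, float], indexed_on: date, window: int = 7) -> float:
    """Relative change in mean visibility, post-indexing window vs. pre-indexing window."""
    pre = [daily_visibility[d]
           for d in (indexed_on - timedelta(days=i) for i in range(1, window + 1))
           if d in daily_visibility]
    post = [daily_visibility[d]
            for d in (indexed_on + timedelta(days=i) for i in range(1, window + 1))
            if d in daily_visibility]
    if not pre or not post:
        raise ValueError("not enough observations around the indexing date")
    pre_mean, post_mean = sum(pre) / len(pre), sum(post) / len(post)
    return (post_mean - pre_mean) / pre_mean
```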

Two articles produced the largest individual jumps:

  • Top Treasury Management Systems (Google AI Overviews): +47% local uplift
  • The Best Banks for Small Businesses in 2025 (Google AI Overviews): +43% local uplift

Individual placements ranged in impact from single-digit to +47% local uplift. The highest-performing articles were high-fit, high-authority pages that already had multiple LLM citations, and they produced the sharpest visibility jumps in the exact week after their expected indexing dates.

That said, in a category this crowded, the gap between a placement that moves the needle and one that doesn't can be hard to predict. What broader coverage gives you is more shots on goal: more chances to land on the pages that produce outsized impact.

LLM targeting changed the outcome

Grouping mentions by primary LLM target:

  • Google AI Overviews: 3 mentions, +31% average local uplift
  • Perplexity: 1 mention, +29% average local uplift
  • ChatGPT: 3 mentions, +6% average local uplift

ChatGPT-targeted placements underperformed for this prompt cluster. The likely reason is that treasury queries are search-adjacent, and Google AI Overviews and Perplexity draw more heavily from indexed web content when generating answers.

The implication isn't that ChatGPT doesn't matter, but that LLM targeting needs to be calibrated to the query type, and that calibration is part of what drives results.

Correlation with active mentions

As mentions accumulated and were indexed (assuming a 3-day lag from publication), visibility moved up in tandem. Correlation between active indexed mentions and daily visibility: 0.36 — moderate, positive, and consistent.
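That correlation is simple to reproduce once you have the publication dates and the daily visibility series (neither is included in this write-up, so the inputs below are placeholders): count how many mentions are assumed indexed on each day and correlate that count with the daily score. Varying the lag parameter is also how the 4- and 5-day sensitivity checks in the methodology notes would be run.

```python
# Sketch of the mentions-vs-visibility correlation: count mentions "active"
# (published at least lag_days ago) on each day, then take the Pearson correlation
# of that count against the daily visibility score. Re-running with lag_days=4 or 5
# mirrors the sensitivity checks in the methodology notes. Inputs are placeholders.
from datetime import date, timedelta
from statistics import correlation  # Pearson correlation, Python 3.10+

def active_mentions(day: date, publish_dates: list[date], lag_days: int = 3) -> int:
    """Number of mentions assumed to be indexed by LLMs on a given day."""
    return sum(1 for p in publish_dates if day >= p + timedelta(days=lag_days))

def mention_visibility_correlation(daily_visibility: dict[date, float],
                                   publish_dates: list[date],
                                   lag_days: int = 3) -> float:
    days = sorted(daily_visibility)
    counts = [active_mentions(d, publish_dates, lag_days) for d in days]
    scores = [daily_visibility[d] for d in days]
    return correlation(counts, scores)
```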

The Timeline

Oct 23–31: No mentions indexed. Baseline visibility ~0.054.

Nov–Dec: First articles go live. Visibility begins climbing. The two anchor articles produce +43–47% local lifts in their indexing windows.

Jan 10–31: The majority of mentions have matured. Visibility holds at ~0.072. The effect sustains rather than fades.

What This Means

First: coverage compounds.

The 33% aggregate lift wasn't one or two big wins. It was the cumulative result of 9 placements building on each other over time. Individual articles vary in impact, and that variance is hard to call ahead of time. The takeaway: casting a wider net across relevant sources gives you more chances for high-impact placements and a more durable overall lift. In a saturated category, that's what separates the brands that break through from the ones that don't.

Second: LLM targeting is a real variable.

Same brand, same queries, different LLM targets — the results split 5-to-1. For search-adjacent categories like treasury management, Google AI Overviews and Perplexity placements consistently outperformed ChatGPT. That's worth building into your approach from day one.

This pilot shows that in even the most competitive categories, consistent, broad coverage across the right sources is what compounds into meaningful, lasting visibility gains.

Key Takeaways

  1. Crowded categories are winnable. Even in a $6.9B market with established incumbents and a dozen fintech challengers, a defined, high-intent query set moved +33% over three months. You can make meaningful gains in specific categories without moving the whole needle at once.
  2. LLM targeting matters more than you think. For search-adjacent query types, Google AI Overviews and Perplexity outperformed ChatGPT 5-to-1. Getting this right matters.
  3. The effect compounds and holds. Visibility stepped up as mentions accumulated and stayed elevated. The right placements don't just create a momentary lift — they shift the baseline.

Methodology Notes

  • Measurement period: October 23, 2025 – February 4, 2026 (100 daily observations)
  • Indexing assumption: 3-day lag from publication to LLM incorporation; 4- and 5-day lags tested as sensitivity checks with consistent directional results
  • Visibility metric: Daily share-of-voice score via Profound (treasury/cash management prompt cluster)
  • Limitations: Small sample (9 mentions, 7 with full pre/post windows), no control group, overlapping pre/post windows across mentions, 2 most recent mentions excluded from per-mention analysis

For the full technical methodology and raw data, contact Noble.
