What 10 years of PPC testing reveals about breaking best practices

PPC performance conversations often focus on best practices.

  • Account structures should be clean.
  • Match types should be controlled.
  • Budgets should be scaled gradually.
  • Campaigns should avoid overlap.
  • Everything should be logical, efficient, and easy to explain.

That foundation matters. It creates consistency and avoids obvious inefficiencies.

But it’s not where the biggest gains come from.

Looking back over the past 10 years, many of the most meaningful performance improvements didn’t come from refining those frameworks. They came from testing ideas that didn’t quite fit them — things that felt slightly uncomfortable but aligned with how platforms actually behave.

In practice, Google Ads and Meta don’t optimize toward best practice. They optimize toward signals. Once you think in those terms, your approach to performance changes.

Control still matters more than you think (SKAGs aren’t actually ‘dead’)

Single Keyword Ad Groups were widely written off as automation improved.

The narrative was simple: machine learning removed the need for granular control. Structure mattered less.

In practice, that wasn’t entirely true.

In several accounts, reintroducing SKAGs on a small subset of high-intent, high-revenue keywords led to immediate performance gains. Query matching tightened, ad relevance improved, and conversion rates increased.

This wasn’t about reverting to old structures across the board. It was about recognizing where precision still adds value.

The takeaway is more nuanced than “SKAGs work” or “SKAGs are dead.”

Control still matters, but only where intent justifies it.

Broad match works better when you aggressively constrain it

Broad match has always carried distrust.

Too much expansion. Too little control. Too much reliance on Google’s interpretation of intent.

In practice, one of the most effective setups combines broad match with aggressive negative keyword management.

Instead of restricting input, you let Google explore — then shape the output.

Search term mining becomes the control layer. You continuously remove irrelevant queries and reinforce valuable ones.

This creates a system where reach expands without fully sacrificing relevance.

The shift is how you apply control.

You don’t control broad match upfront. You control what it learns from.
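That control layer can be automated. The sketch below is a hypothetical Python example of search term mining: it flags queries that spent money or accumulated clicks without converting, making them candidates for a negative keyword list. The report fields and thresholds are illustrative assumptions, not platform defaults.

```python
# Hypothetical sketch: flag search terms as negative-keyword candidates.
# Field names and thresholds are assumptions; tune them per account.

def flag_negative_candidates(search_terms, min_cost=20.0, min_clicks=10):
    """Return queries that spent real money or clicks without converting."""
    candidates = []
    for term in search_terms:
        no_conversions = term["conversions"] == 0
        spent_enough = term["cost"] >= min_cost or term["clicks"] >= min_clicks
        if no_conversions and spent_enough:
            candidates.append(term["query"])
    return candidates

# Example search term report rows (illustrative data)
report = [
    {"query": "buy crm software", "cost": 45.0, "clicks": 12, "conversions": 3},
    {"query": "free crm forever", "cost": 38.0, "clicks": 25, "conversions": 0},
    {"query": "what is a crm", "cost": 5.0, "clicks": 4, "conversions": 0},
]

print(flag_negative_candidates(report))  # ['free crm forever']
```

Low-spend terms like "what is a crm" are deliberately left alone: they haven't accumulated enough data to judge, which keeps the exploration half of the system intact.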

Target Impression Share changes behavior when visibility matters more than efficiency

Target Impression Share is usually positioned as a defensive metric.

Used for brand campaigns. Used to protect coverage. Rarely used for growth.

Applying it to non-branded, high-value terms feels counterintuitive. You prioritize presence over efficiency.

In certain cases, that trade-off is worth it.

By pushing aggressive impression share targets on commercially important queries, you increase SERP dominance and reduce competitor visibility. Conversion volume increases, even if cost efficiency softens.

Here, bidding strategy becomes less about optimization and more about intent.

If your goal is to own a space — not just compete — efficiency can’t be the only metric.

Conversion tracking isn’t the issue. Conversion weighting is.

Most lead gen accounts correctly track multiple conversion actions.

Form fills, phone calls, email inquiries — all captured and visible.

The issue isn’t tracking. It’s interpretation.

When you treat every action equally, the platform has no reason to prioritize one over another.

In one account, assigning values based on the likelihood of becoming an MQL changed optimization almost immediately. Phone calls were weighted highest, email clicks lower, and generic form fills lowest.

Switching to Maximize Conversion Value didn’t increase total conversions. It improved their quality.

The distinction matters.

The platform isn’t misoptimizing. It’s optimizing exactly what you tell it to.
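One way to derive those weights is expected value: the probability that an action becomes an MQL times the average value of an MQL. The rates and values below are hypothetical numbers for illustration, not figures from the account described above.

```python
# Hypothetical sketch: weight conversion actions by MQL likelihood.
# The rates and the MQL value below are illustrative assumptions.

MQL_RATE = {"phone_call": 0.35, "email_click": 0.10, "form_fill": 0.05}
AVG_MQL_VALUE = 400.0  # assumed average value of one marketing-qualified lead

def conversion_value(action):
    """Expected value of one conversion: P(becomes MQL) * value of an MQL."""
    return MQL_RATE[action] * AVG_MQL_VALUE

for action in MQL_RATE:
    print(f"{action}: {conversion_value(action):.0f}")
# phone_call: 140
# email_click: 40
# form_fill: 20
```

Feeding these values into a value-based bidding strategy gives the platform a reason to chase phone calls over form fills, which is exactly the prioritization the raw conversion counts never expressed.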

Competitor bidding works because intent already exists

Competitor campaigns are often dismissed as inefficient.

Higher CPCs. Lower CTRs. Messier reporting.

All true.

They also deliver something most prospecting campaigns struggle to create: existing intent.

Users searching competitor brands are further along in the decision process. They’re not exploring the category — they’re choosing within it.

In multiple accounts, competitor campaigns consistently converted—not at the lowest cost, but with high commercial intent.

With clear positioning, strong differentiation, and relevant landing pages, they became a reliable strategic layer.

You’re not creating demand. You’re intercepting it.

Top-of-funnel keywords don’t convert. They still drive performance.

There’s a tendency to remove anything that doesn’t convert.

Informational queries. Early-stage searches. Broad, non-commercial terms.

Individually, they rarely justify spend.

Collectively, they influence the entire account.

Introducing top-of-funnel keywords that weren't expected to convert improved lower-funnel performance. Remarketing pools strengthened, audience signals improved, and high-intent campaigns became more efficient.

The value is indirect but measurable.

This challenges a common assumption.

Not every keyword needs to convert. Some build the signal that drives conversion elsewhere.

Audience assumptions are often wrong. The data usually isn’t.

Audience targeting often starts with a clear hypothesis.

Who the customer is. What they care about. How they behave.

In one account, the data told a different story.

The “ideal” demographic underperformed, while adjacent audiences converted more efficiently.

Instead of forcing the account toward expectations, you follow the data. You shift budget, expand targeting, and improve performance.

This tension is common in PPC.

What you expect to work and what actually works aren’t always aligned.

Clean structure is often a management preference, not a performance driver

There’s a strong bias toward clean account structure.

No overlap. Clear segmentation. Tight control.

It makes accounts easier to manage, explain, and audit.

It doesn’t always make them more effective.

In several cases, allowing controlled overlap between campaigns, match types, and keyword themes improved coverage and performance. Instead of cannibalization, the system used overlapping signals to make better auction decisions.

This challenges a long-standing assumption.

Structure should support performance—not limit it.

Product feeds are strategic

In Shopping campaigns, the product feed is often treated as backend work.

Ensure accuracy. Ensure completeness. Then leave it.

But the feed directly shapes how products are interpreted and matched to queries.

Rewriting titles to prioritize high-intent keywords, reordering attributes based on performance, and testing naming variations all improved visibility and CTR.

These weren’t cosmetic changes.

They changed how the algorithm understood the product.

In Shopping, your feed is your targeting. Treating it as static limits performance.
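Title rewriting lends itself to a simple template approach: assemble titles from attributes in a deliberate order, then test different orderings against CTR. The sketch below is a hypothetical example; the attribute order and field names are assumptions to experiment with, and the 150-character cap reflects Google's Merchant Center title limit.

```python
# Hypothetical sketch: rebuild Shopping titles so the highest-intent
# attributes lead. The default attribute order is an assumption to test.

def build_title(product, order=("brand", "product_type", "key_feature", "color", "size")):
    """Join available attributes in priority order, capped at Google's 150-char limit."""
    parts = [product[attr] for attr in order if product.get(attr)]
    return " ".join(parts)[:150]

item = {
    "brand": "Acme",
    "product_type": "Running Shoes",
    "key_feature": "Waterproof",
    "color": "Black",
    "size": "US 10",
}

print(build_title(item))  # Acme Running Shoes Waterproof Black US 10
```

Because the template is just a tuple, swapping the order (say, leading with `product_type` instead of `brand`) is a one-line change, which makes naming variations cheap to test at feed scale.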

Retargeting is one of the fastest ways to test what actually works

Retargeting is usually treated as the safest part of the funnel.

High intent. High conversion rates. Predictable performance.

That makes it an ideal testing environment.

Using retargeting audiences to test messaging, offers, and creative variations creates faster feedback loops and clearer results than prospecting.

You can then scale winning ideas with confidence.

This reframes retargeting.

It’s not just a conversion layer. It’s a testing environment.


How top accounts really win

After 10 years in PPC, the biggest shift isn’t the platforms — it’s how they actually work.

Best practices still matter. They’re the foundation.

But they’re not where advantage comes from.

The accounts that outperform understand how signals are interpreted, where systems can be influenced, and when to step outside what’s considered “correct.”

Because the goal was never to follow the playbook.

It was to outperform it.

https://searchengineland.com/ppc-testing-breaking-best-practices-473931