
During this week's PPCChat session, host Julie F Bacchini sought PPCers' views on changes in PPC testing over the last few years, how they approach testing, and what they test most frequently – keyword or audience targeting, ad copy, images, etc.

Q1: How do you think testing in PPC has changed in the last few years? Does it differ by platform?

Google Ads: going from ETAs to RSAs has changed a lot, since everything is dynamic. @duanebrown

Particularly in Google Ads, the ability to test ad copy has really been hampered by the switch to RSAs only. We can't test headlines directly against each other, for example, like we used to. @NeptuneMoon

Isolating variables is really difficult now with so many all-in-one products. @JuliaVyse

I also really miss the data that I used to be able to take FROM Google Ads and apply elsewhere. @NeptuneMoon

Dynamic seems to be the name of the game in ad creative across most platforms now. And the reporting has become much more opaque. @robert_brady

@NeptuneMoon that’s so true! I used to test YT audiences in social, but everyone’s advice is broad broad broad, let the machine do it. @JuliaVyse

Well @JuliaVyse Google certainly wants us to just go broad and let the “AI” do the heavy lifting/figure it out… @NeptuneMoon

The platforms definitely don’t want practitioners going granular anymore. @robert_brady

I think the rhetorical change from Google is really interesting as it’s gone from “you can do all the math yourselves!” to “math is REALLY hard.” And I think we’re still reeling from a few years ago when that change occurred and we couldn’t keep using high school stats to be at the forefront of data methodologies. @ferkungamaboobo

@ferkungamaboobo I think along with what you're saying, we had the “you can't ever know what we know, so don't even try.” @NeptuneMoon

Maybe because I’m dealing with such tiny accounts, but the rhetoric I get is “alright, you’ve got how many variables running? no prob, can you show me the regressions you’re running?” @ferkungamaboobo

And the answer is invariably “lol I don’t have time to do linear regressions, much less the multi-model regressions that are necessary to do the math” @ferkungamaboobo

Well and all the platforms continue to act like accounts of all sizes perform the same, which is laughable. Account volume impacts performance much more now with all of the automation in play. @NeptuneMoon

The platforms are also trying to capture any excess surplus from the auction environment. Savvy advertisers were always able to produce great returns because they were better than average inside the bidding system. As Google particularly moves to almost all Smart Bidding, they can scoop more of that surplus by mixing lower quality inventory in with higher quality and keep everyone just barely at their targets (and theoretical break-even points). @robert_brady

Like I remember when I ran a babymath analysis of copy – I found 48 variables then and I think I was only scratching the surface. Doing a single-tailed t-score was definitely not the right methodology. @ferkungamaboobo

To say the least about experiment design. @ferkungamaboobo
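
To make that concrete: below is a minimal sketch, in Python, of the multi-variable approach being contrasted with a one-tailed t-test – one regression over several copy attributes at once. The dataset, column names, and attribute choices are hypothetical, and a real analysis would weight rows by impressions.

```python
# Minimal sketch: regress CTR on several ad copy attributes at once,
# instead of running a one-tailed t-test on one variable in isolation.
# Data, column names, and attributes are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

ads = pd.DataFrame({
    "ctr":       [0.031, 0.024, 0.040, 0.028, 0.035, 0.022],
    "angle":     ["benefit", "feature", "benefit", "feature", "benefit", "feature"],
    "has_price": [1, 0, 1, 1, 0, 0],
    "has_cta":   [1, 1, 0, 1, 1, 0],
})

# One model, several variables: each coefficient is estimated while
# controlling for the others, which pairwise t-tests cannot do.
model = smf.ols("ctr ~ C(angle) + has_price + has_cta", data=ads).fit()
print(model.summary())
```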

The singular biggest change has been the conversion from a demand capture platform to one where demand gen and demand capture blend. It's a tectonic shift, and it's in the best interest of growing the company. [Insert shareholder comment here.] Advertisers have been forced to adapt accordingly. @teabeeshell

Q2: How are you approaching testing these days? Does it differ by platform?

It's been really difficult, particularly with clients that don't have in-platform conversions. Things change, or don't work as expected, question mark. @JuliaVyse

These days I am recommending a lot more testing on the post-click side of the equation. It has absolutely always been important (hello former web designer here!) but with all the automation levelling out the opportunities on the pre-click side of the equation, testing what will lead to more conversions and then using that in the ad copy is more where I’m advising right now. @NeptuneMoon

Great point @JuliaVyse – it is a lot harder to diagnose issues now than in yesteryear… @NeptuneMoon

Ad copy testing gets harder to do. With Google's RSAs, we try to use each ad to test a POV/copy angle. Then all the copy in one ad talks about that POV/copy angle. @duanebrown

A big shift in my thinking has been away from statistical significance. I feel like I can do better work ignoring the math and pointing at big-picture results. We’re up 20% YOY in KPI1, 40% period-over-period in KPI2 etc. Small, incremental changes in intermediate metrics don’t matter as much and it’s better to discuss those real results. @ferkungamaboobo

I think we are also now in an era where our testing is more directional than granular. And that is so weird! We have spent so many years leaning into all the data we had access to and all that is/has shifted. @NeptuneMoon

Isolating single variables is nearly impossible now. Testing has to be broader and more thematic, but is still possible. Or, basically, what @NeptuneMoon said above. @robert_brady

Agree with much of what’s been said here so far. For clients with large enough budgets (and sophistication), market holdout tests may be the answer. This is especially true as (most) platforms continue to obfuscate data in the name of “privacy” or “it doesn’t matter, trust us.” @teabeeshell
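
For readers unfamiliar with holdout tests: the core read is a difference-in-differences between exposed and held-out markets. A minimal sketch, assuming matched regions and hypothetical conversion counts (real geo experiments layer market matching and significance testing on top):

```python
# Minimal sketch of a market holdout read: difference-in-differences on
# conversions for exposed vs. held-out regions. All numbers hypothetical.
test_pre,    test_post    = 1200, 1450   # conversions in exposed markets
holdout_pre, holdout_post = 1100, 1150   # conversions in held-out markets

test_change    = (test_post - test_pre) / test_pre           # ~20.8%
holdout_change = (holdout_post - holdout_pre) / holdout_pre  # ~4.5% baseline drift

incremental_lift = test_change - holdout_change
print(f"Estimated incremental lift: {incremental_lift:.1%}")  # ~16.3%
```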

Q3: What do you test most frequently – keyword or audience targeting, ad copy, images, etc.? Are clients or stakeholders asking for any specific testing?

Lots of creative testing in terms of videos, and basically channel testing. Does this channel move the needle for recall, consideration, and in-store sales in a particular region? It's a slower process and riskier in terms of overlap with existing channels. @JuliaVyse

In Google Ads, ad copy is most tested. And I am providing more input on the competitive landscape and landing page experience. @NeptuneMoon

Ad copy testing is still impactful. It’s the first place where you begin to differentiate yourself from competitors and set the tone/terms of the future relationship. @robert_brady

Agreed with Julia: strategic testing. Then UX on the landing page – copy especially. Research shows that basic audience groups are less than 50% accurate – what real inferences are you getting from that? Keywords don't really provide a null hypothesis. Ad copy frankly doesn't move the needle enough – what's .05% more CTR really getting you? @ferkungamaboobo

And tbh that strategy can include a bunch of other things, like keyword choice or copy tack – but I think “test one element only” was not a useful paradigm. @ferkungamaboobo

@ferkungamaboobo I think with ad copy, I like to test what resonates more, for sure. But what I really want to keep up with is how competitive are my client’s ads relative to the other ads a searcher is likely to see. @NeptuneMoon

Absolutely targeting first, creative 2nd. Since you can’t isolate individual creative elements, it makes little sense to spend time guessing what “works” on that front. Find audiences that deliver results within KPIs. Put forth quality creative. Flex with seasonality, updated assets, and to an extent, the audiences themselves. We’re far beyond the days of “Does this headline hit or miss?” @teabeeshell

I also have been harping on YouTube. Clients need to get this platform in order to succeed (through Google Ads, organic video, building awareness, etc.) looking ahead. @teabeeshell

In Google Ads I have started testing a lot. Some of the tests that I always run are: @slobodanjelisavac

  • Broad match keywords
  • Broad match keywords with audience segmentation
  • Smart Bidding for competitors (like every single time, clients request testing manual bidding vs. Target Impression Share vs. Max Conversions vs. anything else… and I usually have a continuous A/B test running just for this category)
  • RSAs – I have found that if I keep the same controlled (pinned) headlines, I only need to test the description copy (see the sketch below)
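
With headlines pinned so they are held constant, comparing two description variants reduces to a two-proportion test. A minimal sketch in Python; the conversion and click counts are hypothetical.

```python
# Minimal sketch: with headlines pinned (held constant), compare two RSA
# description variants on conversions per click. All counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

conversions = [86, 64]      # description variant A, variant B
clicks      = [2100, 2050]

z_stat, p_value = proportions_ztest(conversions, clicks)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the description copy, the one element left
# unpinned, is driving the difference.
```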

Q4: What are your biggest testing frustrations? Does it vary by platform?

I think my frustration lately comes more from the lack of understanding of experienced advertising clients about how significantly things have changed in the world of PPC. A lot of taking a deep breath and taking a step back to do education on “how it all works in 2024” and helping clients let their now antiquated ideas go. @NeptuneMoon

Probably similar to many, but I miss the ability to say, “X caused Y with Z% confidence.” The decoupling of keywords, creative, and audiences from hard performance data is particularly frustrating. It forces us to zoom out, which is healthy at times, but it reduces the ability to pivot, double down, or change directions with more micro efforts. @teabeeshell

We should definitely mention how Google used headline text in descriptions, right? @robert_brady https://searchengineland.com/google-testing-headlines-ad-copy-description-text-live-ads-435872

Even in areas you think you have influence, you may not. @robert_brady

Great point @robert_brady – Google is doing all kinds of stuff all the time that we don’t know about. Constantly testing things like this. It is almost impossible to keep up. @NeptuneMoon

I think Anthony Higman posted about more shenanigans in local ads where advertisers’ choices are being completely ignored. Is this a test? We don’t know. https://x.com/AnthonyHigman/status/1746990340391797086?s=20 @NeptuneMoon

@robert_brady at some point, though, there has to be a departure from the concept that the subjective human mind is a better (instant) judge of conversion potential than machine learning. I understand and appreciate the need for control. (I still want it!) But for brands where that matters less, ML beats the human mind far more often than it doesn’t. @teabeeshell

I swing wildly in a few directions. The “ad strength” on various platforms is a baffling metric and makes it hard to do anything. The inability to make normal marketing decisions align with platform choices is maybe the biggest frustration. @ferkungamaboobo

(Obviously, there are areas where control matters greatly, i.e. legal, NPOs, and certain brand accuracy needs) @teabeeshell

@teabeeshell At scale I agree with you. For small to mid-size advertisers without the needed budget to produce enough data flow for the algorithms, the intuition of the advertiser and business owner/employee can do better than low-volume machine learning. @robert_brady

Just not being able to give definitive answers and even designing tests. It’s so much more vague now and there is so much room for unplanned results. Very hard to build a strategy if you don’t know why something works/doesn’t work. @JuliaVyse

“I want to test if my ads focusing on benefits worked better or if my ads focusing on brand name worked better” OK, there’s literally no way to get that answer – and I THINK that’s something an automated NLP analysis could be very valuable for! @ferkungamaboobo
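
A minimal sketch of the kind of automated copy analysis @ferkungamaboobo has in mind: bucket ads into themes, then compare aggregate performance by theme. The tagging rule and numbers are hypothetical; a production version might use embeddings or a language model instead of keyword matching.

```python
# Minimal sketch: tag ads as "benefit" vs. "brand" themed, then compare
# aggregate CTR by theme. Tagging rule and data are hypothetical; a real
# version might use embeddings or an LLM classifier instead of keywords.
import pandas as pd

BENEFIT_WORDS = {"save", "faster", "easier", "free", "results"}

def theme(headline: str) -> str:
    words = set(headline.lower().split())
    return "benefit" if words & BENEFIT_WORDS else "brand"

ads = pd.DataFrame({
    "headline":    ["Save 20% Today", "Acme Official Site", "Faster Results Now"],
    "clicks":      [310, 150, 270],
    "impressions": [9000, 7000, 8100],
})

ads["theme"] = ads["headline"].map(theme)
summary = ads.groupby("theme")[["clicks", "impressions"]].sum()
summary["ctr"] = summary["clicks"] / summary["impressions"]
print(summary)
```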

@ferkungamaboobo most higher-up Google reps tell you to ignore this alert on-platform. Ad strength has no impact on Quality Score or auction strength. @teabeeshell

BUT ALSO I think we spend a LOT of time testing things that are so far away from best practices that it’s baffling what we’re even doing. I talk about benefits there — the amount of feature-focused copy is laughable, and we create it in part to meet the requirements of the platforms because we have our one good ad already. @ferkungamaboobo

@robert_brady I sort of agree in theory, but given the sheer volume of learning data Google has at its disposal, I still think it makes better calls after, say, 100 impressions than any human can. It’s leaning into the law of large numbers, to a degree. @teabeeshell

Again, I am going to mention competitiveness. This is going to be even more important as we move into this new era. With Google making so many decisions about what gets shown to whom and when, we need to be surer than ever that our ads are compelling and competitive! It is amazing to me how many businesses just completely ignore this. As if they truly believe the only thought process that happens for a searcher is to look ONLY at their ad and say yea or nay. Mind-blowing. @NeptuneMoon

I like these kinds of discussions. @robert_brady

@teabeeshell I think Google makes decisions (declares winners) way too fast. They will do it within days, not even letting a full week go by. It zeroes in on behaviours that happened on Monday and Tuesday, let’s say. What if behaviours are quite different on Friday and Saturday? Google may have already decided before getting that data. That, to me, is a problem. @NeptuneMoon

@NeptuneMoon I no longer see the time something has been live (or even click/impression counts) as a valid measuring stick in this age. With auction analysis happening in fractions of a second, plus a mountain of historical data in Google’s quiver, something works, or it doesn’t. It’s competitive, or it’s not. I think there are too many scapegoats to surface before brands have truly exhausted audience, creative, bid, and LP testing. @teabeeshell

I think Google knows A LOT. But they don’t know everything. Particularly about specific businesses. I wish there were more inputs where advertisers could at least tell the machines important things, such as “we consistently do more business on weekend vs. weekday” so the machines could benefit from knowledge the business has rather than starting in the dark with only their data to go on. A collaborative effort would be so much better and stronger (and give G even more data!). @NeptuneMoon

@NeptuneMoon, spot on right there! @teabeeshell
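
@NeptuneMoon’s day-of-week point is checkable on your own data. A minimal sketch, with hypothetical counts, testing whether early-week and weekend traffic actually convert differently before trusting a winner declared after two days:

```python
# Minimal sketch: test whether early-week and weekend traffic convert
# differently before trusting a winner called on two days of data.
# Counts are hypothetical.
from scipy.stats import chi2_contingency

#          [conversions, non-converting clicks]
mon_tue = [40, 1960]   # the early-week sample a fast test is judged on
fri_sat = [85, 1915]   # the weekend sample it never waited for

chi2, p_value, dof, expected = chi2_contingency([mon_tue, fri_sat])
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
# A small p-value means the two day-parts convert differently, so a test
# concluded after Monday/Tuesday can crown the wrong winner.
```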

Q5: Are there things you want or plan to test in the first half of 2024?

I want to give YouTube a fair swing. Somehow in 2024 it’s still a slept-on source of inventory. It’s as worthy an investment (with enough market penetration) as any other “TOF” effort. I have a working model for how to move the needle for both online and in-store purchases, predicated on X penetration over Y interval of time. Eager to launch that and prove/disprove its validity. @teabeeshell

We’re introducing Snap to a client who hasn’t tried it yet. ComScore (I know I know) shows a strong percentage of unduped audience available from IG, so we’re feeling good about recall scores and sales. @JuliaVyse

I keep saying that I am going to do more PMax, but it just isn’t a great fit for clients I generally work with. I want to get better versed in it, but can’t recommend it solely for that reason…@NeptuneMoon

TikTok probably should not be slept on either… @NeptuneMoon https://www.searchenginejournal.com/tiktok-gains-traction-as-a-search-engine-among-gen-z-study/505633/

@teabeeshell Keep us updated on your YouTube efforts. Tons of good inventory, but mountains of worthless junk too. @robert_brady

I have some decent experience with YouTube (and would suggest pairing that with other OTT) – happy to chat/collab on it. @ferkungamaboobo

@robert_brady Channel targeting. It’s limited, needs to be stacked significantly for scale, but it’s safest. @teabeeshell

Yeah 100% – only target by channel, choose your channels specifically. @ferkungamaboobo

There’s no way to do audience targeting effectively on YouTube given the use cases of the platform – you will ALWAYS get the wrong user in a household. @ferkungamaboobo

@ferkungamaboobo not unlike linear and/or CTV. It’s messy, always has been, and there’s no way around that. Geo-based lift studies are going to be the only way to vet whether the investment was ROI-positive. Even with that, proving new customer “incrementality” will be almost impossible. At least with YouTube, you can fail smaller. @teabeeshell

VERY much like TV. I think folks forget there’s, oh, 70 years of making good TV ads to pull from when they do YouTube/OTT/CTV creative. @ferkungamaboobo

@ferkungamaboobo Our house is a perfect example of YT targeting limitations (which are not their fault). Kid is 11. All her devices use my logins – Google and Apple. So they think a middle-aged lady is viewing ads, but surprise! At least half the time it is a tween. @NeptuneMoon

100%! I see it all the time. 80% of the spend is on kids programming and gaming programming, 19.9% is on “influencer” stuff and MFA channels, the rest is maybe in-market. @ferkungamaboobo
