
Hosted by Julie F Bacchini, this week’s PPCChat session shed light on PPC testing and experiments: whether they are easier or harder to execute on different platforms, what budget to request for a particular test or experiment, how to decide what to test or experiment with in accounts, and more.

 

Q1: Do you have regular testing/experiments built into your account processes currently? If so, what does it look like?

 

Not at the moment, but I often recommend them. Light testing of different bid strategies over time, but not a fleshed-out testing program. @JuliaVyse

Kind of – not always as formal as it should be. Keep track of ad copy based on labels & monitor performance aggregated by campaign in some cases. @timothyjjensen

Hell yeah to regular experiments! We’re always testing landing page splits for every account where we have control over landing pages. @CJSlattery

I check/create tests on a weekly basis, as part of the process that I created. But there are times when they need to be checked daily and other times it’s just once a month. @mikecrimmins

I don’t have a formal program, but I want to develop some things that are more “ready to propose” to make things more efficient. @NeptuneMoon

We try to keep a process, but it differs per client. We definitely refresh ads to test – and test on a campaign basis, mainly when warranted. Like if a campaign looks like it could do well on max conversions or CPA, we will test it. @marksubel

Definitely not as much as I should, but mostly a time constraint. @SEMFlem

Most of our clients have dedicated testing budgets — usually something along the lines of 70% current / 20% Experiments + Optimizations / 10% Moonshots + Crazy Ideas. @SamRuchlewicz

In between projects right now, so no. But at my last company, there were many other things that unfortunately took priority over creating a testing plan. There was a basic time schedule, but it wasn’t really adhered to. @mindswanppc

I know better, but I don’t currently have that setup. But, I am staring down the pipe of a bunch of new and mismanaged accounts that I am in the process of cleaning up. @jturnerpdx

We build them into the scope, usually 1 ad copy and 1 landing page test per fiscal year, but it varies wildly. @JonKagan

Yes, of course. We generally start with testing ads against each other and then we’ll mix in different bidding strategies etc into everything later on. @adwordsgirl

 

Q2: Do you find testing/experiments easier or harder to execute on different platforms? If so, in what ways?

 

Difficult to do true ad testing on Google anymore. No such thing as even rotation and ads can show in so many possible formats each time. @timothyjjensen

Each platform gives you more or less control depending on what you want to do. Honestly, easier vs. harder comes down to the client and their site more than the platform, at least for me. @JuliaVyse

Google is probably the easiest place to test. Lots of data, and some tools built right into the platform that make testing easy. Bing, not as much data, etc. @mikecrimmins

I think testing has gotten more difficult all the way around with the rise of machine learning and its application in the platforms. They pick a “winning” ad variation almost from go a lot now, which is frustrating. @NeptuneMoon

I find it really hard on Facebook – in my experience, they find ads that work well and then push everything there instead of rotating a bit more. Now, with dynamic ads, I’m trying to find a new process. @marksubel

Yes. I like taking the lazy man’s approach and want an easy carryover from one platform to the other. @JonKagan

 

Q3: How do you position testing/experiments with clients to get them to say yes?

 

I talk about the risk of overinvesting and being complacent. If you keep throwing dollars at something without looking at options, other platforms, other opportunities, your competitors will out-innovate you. @JuliaVyse

Most of my clients don’t care about the testing/experiments, as long as it leads to more leads. They’re more interested when we’re testing new platforms/channels or campaigns. @mikecrimmins

It’s right from the start. It’s in our website copy, it’s part of the sales process. I call it iterative improvement. I talk a great deal about our process of Hypothesize, test, analyze, and implement in proposals & calls. @CJSlattery

I’ve never had much trouble if I present a hypothesis that makes sense, and explain the potential impact of the experiment. @SEMFlem

When justifying a test in new platform or targeting technique, it’s helpful to have case studies to reference from other clients. Also this is a case where reps can be helpful in pulling data for your client’s industry. @timothyjjensen

I think it’s all about timing – propose it at the beginning of a new budget season (month/quarter) and propose it against something that isn’t performing well, i.e. when there is fresh budget and the proposal is “we can get you X% more conversions if we test this.” @mindswanppc

I find it is helpful to figure out how risk averse the client is early on. Then you can better know what kinds of tests to propose. Also, I have found it helps if you do basic testing first and show results to get more yeses on bigger things. @NeptuneMoon

Depends on the type of testing. For crazy ideas/moonshots, framing it as an R&D initiative tends to work well, especially for clients with reasonable levels of budgeting/financial sophistication. The other one that works well is the “VC mindset.” /1 @SamRuchlewicz

I also will just make crap up, because clients want estimations, and those won’t matter after the test is over as long as you explain the story of what happened well. @SEMFlem

I think it’s just a matter of finding out how much risk a client is okay with taking on and then working within those boundaries. As results come in, they’re more inclined to try new things. @adwordsgirl

I like to say “Sh*t changes, you won’t know if it works for you until you give the old college try”. and then I am told I have to speak to HR…. @JonKagan

 

Q4: How do you determine what budget to request for a particular test/experiment (if you’re doing something that will require designated budget)?

 

This is where (at least I find) having someone with a background in statistics + probabilities is helpful. Once you know the sample you need to be confident in an outcome, work backwards to the cost. @SamRuchlewicz

The other key item is determining the “novelty” period — a lot of times, a test will show (what appear to be) statistically significant results, only to revert to the mean once the novelty of the shiny new thing wears off. Make sure the gains persist! @SamRuchlewicz

I try to back into budget based on the conversions necessary for statistical significance. “This is the size of the data set we will likely need for good analysis, so we will need X budget to get there.” @CJSlattery

Budget depends on the test. A lot of the time, it’s 50% if it’s going to be an A/B test at the campaign level. Other times it’s a smaller part of the overall budget if it’s a new channel, goal, etc. @mikecrimmins

An educated, estimated guess (from experience), with some data to create the estimates – either from within the account or through the Google keyword tool or Facebook audience estimates. @marksubel

I tend to show what the budget is, why we need it for a statistically significant test and take it from there. If they don’t want to invest in it, we find a test that works for them. @JuliaVyse

We try and reserve 5% of the budget for testing as a default. @JonKagan
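For anyone who wants to try the back-of-the-envelope math that @CJSlattery and @SamRuchlewicz describe – working backwards from the sample size needed for a confident read to a budget figure – here is a minimal sketch. The conversion rate, lift, and CPC values are hypothetical placeholders, not anyone’s real account numbers, and the formula is the standard two-proportion sample-size approximation rather than a specific participant’s method.

```python
# Rough sketch: back into an A/B test budget from the clicks needed for a
# statistically confident read. All inputs are hypothetical placeholders.
from math import sqrt

baseline_cvr = 0.04   # current conversion rate (assumed)
min_lift = 0.20       # smallest relative lift worth detecting (assumed)
avg_cpc = 2.50        # average cost per click (assumed)
z_alpha = 1.96        # ~95% confidence, two-sided
z_beta = 0.84         # ~80% power

p1 = baseline_cvr
p2 = baseline_cvr * (1 + min_lift)
p_bar = (p1 + p2) / 2

# Standard two-proportion sample-size approximation (clicks per variant)
n_per_variant = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                  + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2

total_clicks = 2 * n_per_variant
est_budget = total_clicks * avg_cpc

print(f"Clicks per variant: {n_per_variant:,.0f}")
print(f"Estimated test budget: ${est_budget:,.0f}")
```

With these example inputs, the sketch calls for roughly 10,000 clicks per variant; if that budget is out of reach, the practical trade-off is accepting a lower confidence level or only testing for larger lifts.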

 

Q5: Do you have any hard and fast rules for your test/experiments, such as how long they should run, number of things being tested, etc.?

 

Good to use a statistical significance calculator like cardinalpath.com/resources/tool… to determine if you have enough data to make a judgment call. @timothyjjensen

My only hard and fast rule is a minimum of 3 weeks. Other variables relate to different clients and industries, so we can’t be hard and fast per se. @JuliaVyse

I usually say if a keyword (for example) has 30 clicks and no conversions it’s time to decrease bid or even pause. I like to discuss these thresholds with clients so we’re all on the same page. @Finding Amanda

A/B testing shouldn’t really be thought of as truly statistically significant because those calculators aren’t taking into account a lot of the stuff you need to know to _actually_ calculate statistical significance. Treat it all as directional and hints. @ferkungamaboobo

Depends what’s being tested. BTW, I think the “95% statistical confidence” thing makes no sense in marketing – and I say that as someone who studied statistics and was taught 95% confidence. @stevegibsonppc

I mean, there’s a lot that goes into this: what is the distribution? what’s the data scale (category v. continuous)? How many variables? Over what period? This is where it’s helpful to have someone with a background in stats helping you. @SamRuchlewicz

Woo boy. There’s so much that goes into proper statistics. As Tim mentions, significance calculators are a good start but not always ideal. There are some great intro stats courses online that people can take if they want to go deep on this. @CJSlattery

This is very much about how much data you have. The less data you have, the longer the test needs to run for. Never a hard and fast rule. And for statistical significance checks – there are plenty on Google. @mindswanppc

I’d like at least a month, but depending on the budget and the industry that could differ greatly. @adwordsgirl

Always a minimum of 3 weeks, or until we hit a 90% confidence level. After that it varies case by case. But the biggest rule: don’t provide the client data at the end of the first day! @JonKagan
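For anyone curious what the calculators @timothyjjensen mentions are doing under the hood, here is a minimal sketch of a pooled two-proportion z-test with a 90% threshold like the one @JonKagan uses. The click and conversion counts are made-up examples, and per @ferkungamaboobo’s caveat, the output should be read as directional rather than definitive.

```python
# Minimal two-proportion z-test, the kind of check the significance
# calculators above perform. Counts below are made-up examples.
from math import sqrt, erfc

clicks_a, conv_a = 1800, 72   # control (hypothetical)
clicks_b, conv_b = 1750, 91   # variant (hypothetical)

rate_a, rate_b = conv_a / clicks_a, conv_b / clicks_b
pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
se = sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
z = (rate_b - rate_a) / se
p_two_sided = erfc(abs(z) / sqrt(2))  # two-sided p-value via the normal CDF

print(f"CVR A: {rate_a:.2%}  CVR B: {rate_b:.2%}  z = {z:.2f}  p = {p_two_sided:.3f}")
print("Directionally significant at 90%" if p_two_sided < 0.10 else "Keep the test running")
```

A calculator like this ignores seasonality, novelty effects, and repeated peeking, so treat the result as a hint about direction, not a final verdict.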

 

Q6: Is there a test/experiment you’re running or have recently run that surprised you?

 

We did competitor bidding, where we misspelled the competitor’s name in the ad to get around trademark violations, and it shockingly still performed well. @JonKagan

 

Q7: How do you decide on things to test/experiment with in your accounts?

 

This is such a great question! It depends (as usual) but I like to test messages as often as possible. @JuliaVyse

I decide what to test based on accounts with issues that need to be fixed, things I’ve picked up from our PPC people, or hunches I want to test out. How’s that for a broad answer. @mikecrimmins

It must make a big impact if there’s a winner, or else not worth the investment. @SEMFlem

I like to run tests that challenge the “this is how we’ve always done it” thinking, whether that is messaging, offer, CTA, etc. I saw a great tweet the other day which said that if businesses had tombstones, most would say “they perished doing what they always did”… @NeptuneMoon

We get to do 1 of each: A flashy fun one, and one that will possibly give us a return. @JonKagan

For social – start with testing creative and move to testing copy once you’ve found what imagery resonates best. @timothyjjensen

For Facebook, nail down messaging with a broader audience –> narrow audience down/introduce new audiences –> try to see if messaging could be improved. For Google, we’ll test ads as best as we can and then mix in bidding strategies etc. @adwordsgirl

Generally whatever has the highest positive expected outcome. But, if it’s a new client, I tend to start with something that’s high probability of success. @stevegibsonppc

 

Q8: What is the biggest objection or resistance you encounter when you suggest tests/experiments in client accounts?

 

“Will this work? Are we wasting our money?” Most of the time this is the big concern from my clients, no matter the vertical. @JuliaVyse

“We tried that and it didn’t work” @timothyjjensen

Budget… clients that don’t want to spend more, even if there’s ROI. @mikecrimmins

I don’t deal with rejection so much as apathy. I have a great idea that requires the client to do something, and they just don’t see the value in getting around to it. @SEMFlem

And that they can’t expect a test/experiment/moonshot to out-perform a tried-and-true channel. It’s just not going to happen with any level of consistency or regularity. @SamRuchlewicz

 

Q9: What do you wish clients understood better about tests/experiments?

 

It’s just that – an experiment. Some tests will fail, but it’s still a learning experience. @timothyjjensen

Both that (a) it’s essential if you want to get the best possible result and (b) that it’s uncertain. It’s basically R&D for your marketing budget. It’s an investment in the future of your organization. @SamRuchlewicz

That they’re called TESTS for a reason. A successful outcome is not pre-determined. Not all tests produce positive results. Don’t try to bail too early. Wait for the appropriate amount of data to come in. @CJSlattery

What’s that Wayne Gretzky quote? Oh yeah, “You miss 100% of the shots you don’t take”. @NeptuneMoon

The results can be residual…as in it will improve next month’s results, and the month after that, and the month.. @SEMFlem

On the flip side, it can look great at the start and end up being a loser. That’s how these things work. Nothing was “done” to it to ruin it. @CJSlattery

I think you all have talked me into when we try new channels or verticals, calling it a test. @mikecrimmins

Also, that it’s 100% OK for the outcome of a test to be “this doesn’t work” or “what we’re currently doing works better today.” And if it’s the latter, then it is OK (and advisable) for you to run the same test in 6/12/18 months! People change! @SamRuchlewicz

(in full disclosure, one of the things that drives me absolutely mad is when a client says, “Well, our old agency ran that test in 2015 & it didn’t work, so we don’t want to do it again.” Because people in 2019/2020 DEFINITELY behave the same way they did in 2015…)  @SamRuchlewicz

That these are experiments, we are there to learn. They are not the final gospel! @JonKagan

 

Q10: Is there anything the platforms could do to make tests/experiments easier to implement?

 

Would love for @LinkedInMktg to implement an experiment feature like Google/MS/FB has. Oh, and actual data on how responsive ad elements perform in Google. @timothyjjensen

I wish there were native ways to split test landing pages better in the platforms. It shouldn’t require an external tool or service for that function. @NeptuneMoon

Give us testing systems that don’t require a PhD to understand how to set up. @JonKagan

 

