7 Things You Should Know About Testing

by James on May 18, 2009

Making decisions about your marketing strategy is hard. Most of the time you are guessing or, said more charitably, using your instincts. Well, there is an alternative: you could test to see which course is best. But testing is hard, right? You need to know math and statistics. Well, not really.

Here are 7 things you should know about testing. Use them as guides to develop an effective testing strategy for your marketing plan. Then you and your staff will know how to think about specific tests.

Hip Shots

  1. Test for the right reason. Don’t use testing as a substitute for backbone in your marketing strategy:

    “I’m not sure which direction is best so let’s test them both.”

    This isn’t a good reason to test. First, it suggests you have no opinion. If you don’t have an opinion, then marketing isn’t your field. Testing is also often used to avoid conflict. If you can’t handle conflict, then marketing isn’t your field. Seeing a pattern here? Get some backbone and make a decision.

  2. Test to make it better. If you have a successful campaign or program don’t succumb to boardroom boredom and stop using what’s working. Your customers are not as bored with the campaign as you are. I guarantee it. But you should always try to beat your current marketing strategy. Testing to make a campaign better is a great reason to test.
  3. Test big things. Test things that make a difference. Blue versus green isn’t a big thing. Don’t test this, just decide which is best. A new medium or media channel is a big thing, so is a new offer, or a new format. Test big things like these to see if you can do even better.
  4. Test things in isolation. Be sure to isolate the element you are testing or, said another way, don’t test multiple things simultaneously. Unless you are a very sophisticated mathematician and thoroughly understand the subtleties of multivariate testing and test design, you won’t know which element caused the success or failure of the test. Pick one big thing and test it.
  5. Test with the right-sized sample. Don’t test a larger sample than you need. Determine the sample size you need for statistical validity, then work back up the response waterfall to determine the size of the test cell. For example, if you need 200 sales to have a statistically valid comparison, apply your conversion rate and response rate to 200 to determine how many people need to be in the test cell. If your test cell is too large, you are putting valuable marketing resources at risk needlessly. If it’s too small, you will be making decisions based on sketchy data. Neither outcome gets you where you need to be.
  6. Test what you can afford. Test only what can be rolled out to your total business. Don’t test an offer or a media channel you can’t afford to use. Guess what: it might work. Then what? If you can’t afford to use it for the national campaign, don’t test it.
  7. Roll out your successes slowly. Your test was a success. Fantastic.

    “Let’s cancel everything and run with the new program.”

    Not so fast. Testing isn’t a perfect science. Before you cancel everything, and join the New Coke Product Manager on the unemployment line, confirm that the test really reflects reality. Run a confirmation test. This is usually the same as the successful test but uses a larger sample. If your new marketing strategy continues to perform in the confirmation test then rolling it out is the smart thing to do.

So have at it, test for fun and profit.

2 comments

JJ Gray May 18, 2009 at 9:53 am

This is good. I’ve actually experienced quite a few instances of people not understanding even Point #1. One other addition I might make there is not only to consider the “backbone” side of the equation, but also the larger “why.” What do you hope to learn from a test? Are you trying to beat a control? Attempting to gauge your customers’ preferred offer strategy or method of response? Essentially, how are you going to use this information down the road?

I’ve had several one-run campaigns where a client/colleague suggested a creative test as a fall-back position when faced with a difficult decision. They were then forced to reconsider their position (or lack thereof) when asked what they were trying to learn and how they would use it later.

As a creative, I’m not normally one to preach the virtues of science. But I do take a more scientific view when it comes to creative testing: know what you are trying to learn, how you’re going to measure the results, and why you give a rip in the first place.

James May 18, 2009 at 10:17 am

Great point JJ. Before embarking on a test ask yourself how you will use what you’ll learn.
