10 Things to Consider When Designing a Testing Strategy

by James on December 22, 2008

Beth Harte, who blogs about marketing at The Harte of Marketing and whom I respect immensely, recently wrote about Social Media rules in her post, “Who Made the Social Media Rules?” She made the statement,

“As marketers it’s our nature to test, test again and re-test…and to push the limits.”

Unfortunately, my experience has been the opposite. I find most marketers, unless they have a background in Direct Marketing, don’t know how to test effectively, or frankly don’t want to. There is too much risk to their careers, and the organizations they work for don’t tolerate failure, never mind find value in it.

Consider the company you work for. Do they put 10% to 15% of their marketing budget aside for testing? Do they have a documented testing strategy? I bet they don’t.

If you decide to be the exception, the following 10 rules will help you develop a testing strategy. They may seem obvious, but based on how often I observe their absence, and as often see them broken, you will be well served to print them out and refer to them every time you consider a test.

Hip Shots

  1. Test a single element / approach—be sure you have isolated the thing you are testing or you won’t know whether or not it’s the cause of any change you observe.
  2. Test big things—blue versus green is rarely the answer to your question.
  3. Record test results—obvious I know but record your results so you, or a subsequent manager, can compare future efforts against the current test.
  4. “Beat the Champ”—don’t fall victim to Board Room Boredom. Whatever is and has been successful is the Champ to beat, not the approach to abandon.
  5. Results must be statistically valid—determine what quantity is required for statistical validity and work backwards to determine the size of each test cell (see the sketch after this list).
  6. Analyze carefully—even a successful test will have further learning in the details if you are prepared to look.
  7. Test for yourself—don’t take past performance as absolute. Run your own tests and draw your own conclusions.
  8. Results aren’t forever—markets change, economies change, media change, even consumers change.
  9. Avoid over-testing—test something, then run a confirmation test, then roll it out. Analysis paralysis isn’t pretty, or productive.
  10. Don’t test what you can’t roll out—start at the end and work backward to the test. If the end is a valid business then you will be able to roll out a successful test.

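To put rule 5 in concrete terms, here is a minimal sketch of the standard sample-size formula for comparing two response rates. The post contains no code, so Python, the function name cell_size, and the 2.0% / 2.5% response rates below are all my illustrative assumptions; plug in your own champ’s rate and the smallest lift that would be worth rolling out.

```python
import math
from statistics import NormalDist

def cell_size(p_control: float, p_test: float,
              alpha: float = 0.05, power: float = 0.80) -> int:
    """Minimum names per test cell to detect the difference between
    two response rates with a two-sided z-test on proportions."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 at 95% confidence
    z_beta = NormalDist().inv_cdf(power)           # 0.84 at 80% power
    variance = p_control * (1 - p_control) + p_test * (1 - p_test)
    n = (z_alpha + z_beta) ** 2 * variance / (p_control - p_test) ** 2
    return math.ceil(n)

# Example (illustrative rates, not figures from the post): the champ
# responds at 2.0%; the challenger only matters if it reaches 2.5%.
print(cell_size(0.02, 0.025))  # ≈ 13,807 names in EACH cell
```

Note the direction of the arithmetic: the smaller the difference you care about detecting, the larger each cell must be, which is exactly why you work backwards from the required quantity to the size of each test cell rather than the other way around.
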
1 comment

Beth Harte January 3, 2009 at 4:34 pm

Hi James,

Happy New Year! Got caught up with the holidays, so I apologize for not commenting sooner :)

Your hip shots are spot on! I think I had mentioned that I was in the tech industry, which might be why we tested so much. I am curious which industries are notorious for not testing, and why.

Love this hip shot description: “Analysis paralysis isn’t pretty, or productive.” So very true!
