Almost any marketer who has ever pushed the “send” button on an email understands how powerful testing can be in improving results. Sometimes we skip testing when time is short and resources are scarce, but we all know we should be testing. That’s why I often get frustrated when someone takes the time to construct a test but makes a fundamental mistake that undermines the results.
One area of testing where I often see mistakes is the selection of the number of recipients in a test group. Sometimes the error is pulling too many records, and sometimes it’s pulling too few. Too many, and we waste the opportunity to send the better design to the majority of the list. Too few, and our results may not mean as much as we think they do.
So, how do we determine how many addresses we should put into our test? Statistics! Simply plug in the numbers, and well-established formulas tell you how many recipients you need in your test. No need to dive into the math here; many online sample size calculators can do the work for you.
Let’s take a look at an example in which I want to test how two different subject lines will impact open rates, given the following parameters:
- My list has 3 million recipients.
- I would like to be 90 percent confident in each measured open rate, with a margin of error of no more than 1 percent.

In other words, I need to be relatively sure that the 2 percent difference in open rates between message A (open rate: 32 percent) and message B (open rate: 30 percent) does in fact indicate a winning subject line. Plugging these numbers into a sample size calculator, I find that I need a test list of about 6,800 recipients for each message.
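The calculators hide the math, but for the curious, the standard margin-of-error formula for a proportion reproduces the figure above when you make the conservative assumption that the open rate could be 50 percent (which maximizes the required sample). A minimal sketch in Python (the function name is my own):

```python
import math

def sample_size(z: float, margin_of_error: float, p: float = 0.5) -> int:
    """Minimum sample size to estimate a proportion.

    Standard formula: n = z^2 * p * (1 - p) / e^2, using the
    conservative p = 0.5 unless you have a better estimate.
    """
    n = z ** 2 * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n)

# 90 percent confidence (z ≈ 1.645), 1 percent margin of error
print(sample_size(1.645, 0.01))  # → 6766, roughly the 6,800 quoted above
```

With each open rate measured to within about 1 percent, a 2 percent gap between the two messages is comfortably outside the noise.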
Easy enough, but what about clicks? Here is where I frequently see testing errors. To test how message design affects clicks, for example, you need to determine how many people actually view the design, not how many people receive the message. Using the example above, if I’m testing how two different message designs affect clicks, I need about 6,800 people to view each message. That means I need to factor an expected open rate into how many records I pull from my original list. If I think I will get a 20 percent open rate, I’ll need to pull 34,000 addresses for each test segment:
- 34,000 Sent Messages x 20% Open Rate = 6,800 Opens
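The gross-up arithmetic above amounts to dividing the required number of opens by the expected open rate. A small helper makes the relationship explicit (function name is my own):

```python
import math

def sends_needed(target_opens: int, expected_open_rate: float) -> int:
    """Number of messages to send so that, at the expected open rate,
    enough recipients actually see the design under test."""
    return math.ceil(target_opens / expected_open_rate)

# 6,800 required opens at a 20 percent expected open rate
print(sends_needed(6800, 0.20))  # → 34000 addresses per test segment
```

Note that a lower-than-expected open rate shrinks your effective sample, so it pays to use a pessimistic open-rate estimate when sizing the pull.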
Often, we don’t have lists large enough to accommodate test segments of this size. In these cases, we may have to accept more uncertainty in our results and should repeat the test a number of times to get a better indication of whether we have a true winner or not.
Hopefully I haven’t scared you away from testing. Email is a wonderful channel for experimentation, but when we test we do need to carefully construct the test and correctly interpret the results. There are many aspects of testing worth considering, like what to test or how long to let a test run. I’ll be examining more of these in the future. Till then, keep learning and have fun.
I’m very interested in your feedback, article suggestions, and any comments or questions on this topic or others you may have on email marketing—please post your thoughts below or email me at SilverpopStrategyConsulting@silverpop.com.