A/B testing is not just a best practice. It’s a culture, said Jim Pugh, CEO of ShareProgress, based in San Francisco. “It’s a commitment of running your program in a different way, measuring the effectiveness of your choices, making decisions using data,” he said.
Speaking at Salsa Labs’ FUSE 2014 conference in Annandale, Va., Pugh discussed creating a culture of testing at a nonprofit. It’s easy to get overwhelmed with choices, he said. Which metrics do you measure? For email, “The two you want to pay attention to are action rate and unsubscribe rate,” he said. “What you care about is how many people you can get to engage in action.”
When thinking about the unsubscribe rate, Pugh said to remember that email isn’t actually free. “Every time you communicate with your list, people will decide to leave and there’s a cost there,” namely that you won’t be able to communicate with those people to engage them in future actions or donations, he said.
Testing is not as simple as sending two emails and seeing which does better, said Pugh. It’s necessary to understand statistical significance. “Depending on how much better [an email does] and how big a [sample size], your test might not tell you that much. There’s always random chance,” said Pugh. “Make sure your test groups are large enough for significant confidence at a certain level that the results tell you something real.”
Pugh recommends using a calculator like Abba (www.thumbtack.com/labs/abba) to determine if your test outcomes or projected outcomes will be the result of a well-designed test with a large enough audience or organic change that would have happened anyway. “You can plug in your audience size, sample size and results, and can use it to figure out the appropriate group sizes,” he said.
Look at the calculator result’s p-value, said Pugh. A p-value of 0.1 or less corresponds to at least 90 percent confidence that the test results are not due to random chance; a p-value of 0.05 corresponds to 95 percent confidence.
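For readers who want to see where such a p-value comes from, here is a minimal sketch of the standard two-proportion z-test applied to an email A/B test. The function name and the recipient counts are illustrative, not from Pugh's talk or the Abba tool, and a real program would typically use a statistics library rather than hand-rolling the formula.

```python
import math
from statistics import NormalDist

def ab_test_p_value(actions_a, size_a, actions_b, size_b):
    """Two-sided p-value for the difference in action rates
    between two email variants (two-proportion z-test)."""
    p_a = actions_a / size_a
    p_b = actions_b / size_b
    # Pooled rate under the null hypothesis that both variants perform the same
    pooled = (actions_a + actions_b) / (size_a + size_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / size_a + 1 / size_b))
    z = (p_b - p_a) / se
    # Probability of seeing a difference this large by random chance alone
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical send: 2,000 recipients per variant; B's action rate looks higher
p = ab_test_p_value(actions_a=100, size_a=2000, actions_b=125, size_b=2000)
print(f"p-value: {p:.3f}")  # under 0.1, i.e. roughly 90%+ confidence the lift is real
```

With the same rates but smaller groups, the p-value rises above 0.1 and the test "might not tell you that much," which is exactly the sample-size caution Pugh raises.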
“You need to balance the benefit of your actions versus the cost of unsubscribes,” said Pugh: the value of the actions an email generates has to be weighed against the future value lost when people leave the list.
To experiment with unsubscribe rates, use a trial balloon instead of an A/B test. Send just one email to a randomized segment of your list. “Look at action rates and unsubscribe rates, and figure out an acceptable ratio,” said Pugh.
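Drawing that randomized segment is straightforward to do in code. This is a generic sketch, not a feature of any particular email platform; the list contents and the 5 percent fraction are assumptions.

```python
import random

def trial_balloon(email_list, fraction=0.05, seed=None):
    """Draw a random segment of the list for a single-send trial balloon."""
    rng = random.Random(seed)
    k = max(1, int(len(email_list) * fraction))
    return rng.sample(email_list, k)

# Send the test email to `segment` only, then compare its action and
# unsubscribe rates before deciding whether to email the full list.
segment = trial_balloon([f"user{i}@example.org" for i in range(10_000)])
print(len(segment))  # → 500
```

Because the segment is randomized, its action and unsubscribe rates should be representative of the full list, which is what makes the trial balloon informative.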
A culture of testing does not begin and end with email. “There can be a culture of testing in everything you do,” said Pugh. “With online ads, platforms usually have A/B testing features. You can even do it with offline actions. If you have volunteers making phone calls, have different groups with different scripts. As long as you’re able to measure, you can apply a culture of testing.”