Creating an Experiment

Knowledge Level: 
Time: 10 minutes
Suggested Skills: Statistics
Objective: Experiments are a great way to measure responses to variable content. With Experiments, you can answer questions like "Which subject line garners more opens?" or "Which email copy or design garners more total clicks, or more clicks on a particular link?" While it may take some time to set up an experiment in the short term, you can learn invaluable information that will benefit your digital marketing efforts in the long term. This article outlines some basic statistics to keep in mind. To learn how to set up a specific experiment, select the type of experiment you'd like to create near the bottom of this article.

Before we review the steps to create your experiment, let's revisit your college statistics classroom. After all, whichever experiments you run, you want them to be statistically significant, right? Here are a few terms to understand before we begin:

Statistical significance: The likelihood that a result or relationship is caused by something other than mere random chance.

Confidence level: How sure you can be that the results are accurate, i.e., that the true value for the whole population falls within the margin of error.

Confidence interval/margin of error: A number that represents how far the result from your sample can deviate from the result you would get if you surveyed the entire population. For example, using the parameters from the table below, if 56% of your sample group opens the Version B email, you can be 99% certain that between 54% and 58% of the population would open the same Version B if they were sent the same campaign under the same circumstances as the experiment. (Other things to consider would be time of day, day of week, holidays, etc.) A short worked example of this calculation follows these definitions.

Population: The total group you wish to understand more about. In general, this would be your Active Contacts.

Sample Size: The portion of the population you survey or experiment on in order to infer the behavior or trends of the total population.
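To make the margin-of-error example above concrete, here is a minimal sketch in Python (the language, the roughly 4,000-contact sample size, and the z-score of 2.58 for a 99% confidence level are illustrative assumptions, not part of the product) that computes an approximate Wald confidence interval around an observed open rate.

```python
import math

def confidence_interval(observed_rate, sample_size, z=2.58):
    """Approximate (Wald) confidence interval for an observed rate.

    z = 2.58 corresponds to roughly a 99% confidence level.
    """
    standard_error = math.sqrt(observed_rate * (1 - observed_rate) / sample_size)
    margin = z * standard_error
    return observed_rate - margin, observed_rate + margin

# Example: 56% of a roughly 4,000-contact sample opened the Version B email.
low, high = confidence_interval(0.56, 4000)
print(f"99% confidence interval: {low:.0%} to {high:.0%}")  # about 54% to 58%
```

With a sample of about 4,000 contacts, the margin of error works out to roughly 2 percentage points, which is where the 54% to 58% range in the example comes from.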

How should I determine whom I sample, and how many variations should I have?
To achieve statistical significance, we recommend the following minimum sample sizes, depending on your number of active contacts. You can reduce the margin of error by increasing your sample size, but the relationship is not linear (doubling your sample size will not halve your confidence interval). Additionally, you want your sample to be truly random. Are you pulling all active contacts or only particular lists? For example, if you want to engage more with young millennials through your emails, running an experiment on your long-time donors will not necessarily give you the most relevant data for achieving your goal.

The thing to remember when deciding on the appropriate number of variations: more does not mean better. Each additional variation splits your sample into smaller groups, which makes each variation's result less reliable.

These minimum recommendations assume a 99% confidence level and a confidence interval/margin of error of 2 percentage points.

Population of Active Contacts | Recommended Minimum Sample Size
10,000  | 2,938
25,000  | 3,567
50,000  | 3,841
75,000  | 3,942
100,000 | 3,994
125,000 | 4,026
150,000 | 4,048
175,000 | 4,064
200,000 | 4,075
300,000 | 4,103
400,000 | 4,117
500,000 | 4,126


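For reference, the figures above are consistent with the standard sample-size formula for a proportion with a finite population correction, using a z-score of about 2.58 (99% confidence), a margin of error of 2 percentage points, and the most conservative assumed response rate of 50%. The Python sketch below (an illustration, not part of the product) reproduces the table and lets you estimate a minimum sample size for a population that isn't listed.

```python
def minimum_sample_size(population, z=2.58, margin_of_error=0.02, p=0.5):
    """Minimum sample size for estimating a rate, with a finite
    population correction.

    Assumptions: z = 2.58 (about 99% confidence), margin_of_error = 0.02
    (plus or minus 2 percentage points), and p = 0.5, the most
    conservative guess at the true rate.
    """
    n_infinite = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    n_corrected = n_infinite / (1 + (n_infinite - 1) / population)
    return round(n_corrected)

for population in (10_000, 50_000, 100_000, 500_000):
    print(f"{population:,} -> {minimum_sample_size(population):,}")
# 10,000 -> 2,938   50,000 -> 3,841   100,000 -> 3,994   500,000 -> 4,126
```

Note how slowly the required sample grows: going from 100,000 to 500,000 active contacts adds only about 130 contacts to the minimum sample, which is why the recommendations level off just above 4,100.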
With the above information in mind, let's begin your experiment! You can create one of five experiment types. Click on the experiment type of your choice to learn how to create that specific type of experiment.

From Name: This will create an experiment with variations of the text that appears in the 'From Name' field in your patrons' inboxes. Recommended metric: Unique opens.

From Email: This will create an experiment with variations of the email address that appears as the sender in your patrons' inboxes. Recommended metric: Unique opens.

Subject: This will create an experiment with variations of the text within the subject line. Recommended metric: Unique opens.

Design: This will create an experiment with variations of email designs. Recommended metric: Unique clicks or unique clicks on a specific link.

Preheader: This will create an experiment with variations of the text within the preheader line. Recommended metric: Unique opens.

With each experiment, you will want to consider another question: "Do I want to suppress contacts who have already received any email communication from my organization within the past week?" If so, you'll want to check Failsafe sending in your experiment.

What is Failsafe Sending?
This option provides additional protection against multiple sends by considering all of your email-related account activity before sending each variation and the winner. You do not need to enable this option to ensure that, within the scope of the experiment, no single contact receives multiple sends; that protection is always applied automatically.

Enabling this option ensures that any contact who was sent any email from your account within the 7 days prior to a given variation's send time or the winner's send time will not be sent that variation or the winner. 

Enabling the failsafe sending option can significantly impact the number of contacts that each variation and even the winner is ultimately sent to. Always refresh the total contact count after selecting this checkbox in order to see the estimated impact on your Test Group and the Remainder Segment (calculated as of the current day).
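Conceptually, the failsafe rule amounts to filtering out any contact who was sent an email from your account in the 7 days before the scheduled send time. The sketch below is only an illustration of that rule; the contact data, field names, and function are hypothetical and not the platform's actual implementation.

```python
from datetime import datetime, timedelta

# Hypothetical contact records: (email, datetime of last email sent from the account)
contacts = [
    ("a@example.org", datetime(2024, 5, 1, 9, 0)),
    ("b@example.org", datetime(2024, 5, 6, 14, 0)),
    ("c@example.org", None),  # never emailed
]

def failsafe_filter(contacts, send_time, window_days=7):
    """Drop contacts emailed within `window_days` before `send_time`.

    Conceptual sketch of the failsafe rule: a contact is suppressed if any
    email from the account reached them in the 7 days prior to this send.
    """
    cutoff = send_time - timedelta(days=window_days)
    return [
        (email, last_send)
        for email, last_send in contacts
        if last_send is None or last_send < cutoff
    ]

eligible = failsafe_filter(contacts, send_time=datetime(2024, 5, 8, 10, 0))
print([email for email, _ in eligible])  # ['a@example.org', 'c@example.org']
```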
