Let’s face it: you’re not split testing your newsletter headlines or content right now.
I know a lot of authors. I know a lot of very successful authors. And none of them are doing this. Here’s why every single one of you, successful or not, should absolutely, 100% be doing this all the time.
First of all, what is split testing?
Split testing is when you send 2 (or more) different versions of an email to your list. For example, let’s say you have a medium-sized list (for an author) of 2000 readers, and you want to split test two headlines for your latest release. You would set it up so that 1000 of your readers get an email with this headline:
Check out Jane Smith’s latest release now!
And then the other 1000 readers on your list would get an email with this headline instead:
Jane Smith’s newest hit: read the rave reviews!
Once you start collecting data, you can start to figure out which headlines generate the highest click-through rates. You can also split test your newsletter’s content.
Split testing your headline will show its results through the email’s open rate (the percentage of people who see the headline and open the email).
Split testing your content will mostly show its results through the email’s click-through rate (the percentage of people who click on the links to your books in the body of the email). I say mostly because the headline you’ve used will also impact whether or not they click, but it won’t impact it as much as the email body text.
“But Liv,” you ask, “why should I bother split testing with just a couple thousand subscribers?”
Well, the thing is, you don’t even need a couple thousand subscribers for split testing to make a difference.
But for the sake of continuing with the example of our list of 2000 people, let me use that one to show you just how valuable split testing can be.
Let’s assume that your list of 2000 people has an average open rate of 35%. That means every time you send an email, around 700 people open it.
We’ll also assume that your click rate is around 15%. That means for every email you send, around 300 people – or around 40% of the people who opened the email – will click the link to your book. If 40% of those people then buy your book, you’ve made 120 sales from 2000 emails.
That’s not bad (it’s 120 sales you wouldn’t have had otherwise) but it’s not great either.
What happens if you split test enough that you increase your numbers?
Now let’s assume that the only thing you split test is your headline; you continue using the same content you’ve always used.
After some split testing, you’ve managed to get your open rate up to 60%. This is in no way unrealistic, by the way. You just need to test, test, test!
Now from your list of 2000, you’re getting 1200 opens! If you maintain a 40% click rate from people who open your email, that means you’ve gotten 480 clicks to your book. This brings you to a 24% click rate overall, just from changing the headline. And, if 40% of them buy, you’ve gotten 192 sales instead of 120.
That’s much, much better, right? Especially to help bump you up the lists on launch day. Think of what kind of impact an extra 72 sales on day 1 would have.
Now imagine you get the click-through rate among people who open your email up to 60%, and you do a good enough job selling your book that 50% of clickers buy. Now you’re looking at 360 sales, up from your original figure of 120. You’ve tripled your sales.
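If you want to play with these funnel numbers yourself, here’s a minimal Python sketch. The function name is mine, and the rates are just the illustrative figures from the example above, not benchmarks for your list:

```python
def estimated_sales(list_size, open_rate, click_rate, buy_rate):
    """Estimate one email send's results. click_rate is clicks as a
    share of the whole list; buy_rate is buyers as a share of clickers."""
    opens = list_size * open_rate    # people who open the email
    clicks = list_size * click_rate  # people who click a book link
    sales = clicks * buy_rate        # people who buy
    return round(opens), round(clicks), round(sales)

# The three scenarios from the example above:
print(estimated_sales(2000, 0.35, 0.15, 0.40))  # baseline: (700, 300, 120)
print(estimated_sales(2000, 0.60, 0.24, 0.40))  # better headline: (1200, 480, 192)
print(estimated_sales(2000, 0.60, 0.36, 0.50))  # headline + content: (1200, 720, 360)
```

Swap in your own list size and rates to see what a headline or content improvement would actually be worth to you.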
Suddenly split testing your email headlines and content seems like a good idea, right?
But what if your list is a lot smaller? What if you only have 200 people on your list, instead of 2000? Is it still worth it?
Of course it is!
If your mailing list is only 200 people, with the same stats as above you would go from around 12 sales on launch day to 36. And chances are, if your list is only 200 people big, those extra sales might make a big difference for you!
Besides, you don’t plan on having a list of 200 forever. Eventually you want to increase those numbers. If you learn how to get great results through testing when you only have 200 subscribers, you’ll sell a lot more as your list grows to 2000 and beyond than if you wait until you have a larger list to start testing.
The main thing you need to worry about when split testing, especially with a smaller list, is statistical significance.
What is statistical significance?
Statistical significance is a measure of how confident you can be that a result isn’t just random chance. If you flip a coin twice, you might get heads twice. But if you conclude from that result that flipping a coin 100 times will get you heads 100 times, you’d be wrong: your sample was just too small.
So here, if you send a headline or content to too few people, you might get skewed results.
Personally, I always make sure to send a different headline or content to at least 100 people, and if I can, at least 500.
So if your email list only has 100 people on it, I would try to grow it to at least 200 people before starting split testing, otherwise you might end up with some false data.
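If you want something firmer than a rule of thumb, a standard two-proportion z-test tells you how likely it is that the gap between your two variants is just chance. Here’s a minimal sketch using only Python’s standard library; the sample numbers are hypothetical:

```python
from math import erf, sqrt

def two_proportion_p_value(opens_a, sent_a, opens_b, sent_b):
    """Two-sided p-value for the difference between two open rates.
    A small p-value (conventionally < 0.05) suggests the difference
    is unlikely to be random chance."""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate, assuming (as the null hypothesis) no real difference
    p_pool = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sent_a + 1 / sent_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Headline A opened by 380 of 1000 readers, headline B by 430 of 1000
print(two_proportion_p_value(380, 1000, 430, 1000))
```

With 1000 sends per variant, a 38% vs. 43% gap comes out well under the 0.05 threshold, so you can trust that B really is better. Feed the same kind of gap in with only 20 sends per variant and the p-value is large – which is exactly why sample size matters.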
If you have a big list, you can try split testing 3-4 headlines at a time if your email provider allows for it (most only allow A/B). Just make sure each test segment is large enough before you run the test. If you have a list of under 1000 people, I would stick to testing 2 different headlines or pieces of content at a time. Remember: the bigger your sample size, the more statistically significant your results.
“All this sounds great Liv, but how do I actually do the split test?”
Luckily, pretty much every email list provider these days has a built-in split tester. I use MailerLite, and it’s super, super easy to do.
When you click the “create campaign” button, the very first thing it asks you is to choose a campaign type. Instead of going with a “regular campaign” just check the “A/B Split campaign” button next to it on the top. And then it becomes really self explanatory. See the picture below for reference!
Note how MailerLite doesn’t let you split test more than one variable at once. You have to choose between the subject, the from name (which, honestly, I’ve never tested), and the email content.
It’s good that it does this, because you want to test everything in a vacuum as much as possible. Always only ever test one variable at a time. If you’re testing new subjects and content at the same time, you won’t know if more people clicked through your links because the subject line convinced them to, or because the content did.
Only ever test one thing at a time. This is really important, and will allow you to get more accurate results long term.
Once you’ve created your two types of content, you can very easily choose what percentage of people are going to get each result.
Honestly, this part of MailerLite is one of the few things I don’t like about their system (though I still use them, because my lists are big enough that it doesn’t affect me). They don’t let you do a true 50/50 split. Instead, MailerLite sends your A test to 25% of your list and your B test to another 25%, picks the winner out of that 50% by a stat you dictate (I always choose clicks and not opens – after all, your ultimate goal is to get as many people actually buying the book as possible), and then sends the winning version to the remaining 50% of your list.
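To make those mechanics concrete, here’s a quick sketch of that 25/25/50 behaviour. The function is my own arithmetic for illustration, not MailerLite’s API:

```python
def ab_test_groups(list_size, test_share=0.50):
    """Split a list MailerLite-style: half the list is divided evenly
    between variants A and B, and the rest later receives the winner."""
    per_variant = int(list_size * test_share / 2)
    winner_group = list_size - 2 * per_variant
    return per_variant, per_variant, winner_group

print(ab_test_groups(2000))  # (500, 500, 1000): 500 readers get A,
                             # 500 get B, and 1000 get whichever wins
```

So on a 2000-person list, only 500 people ever see each variant, and the bulk of your list gets the winner.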
If you have a very small list, MailChimp might actually be better for you (those are some words I never thought I’d say!) to split test, as MailChimp does allow you to do a real 50/50 split.
So give it a shot. After all, what do you have to lose? Try out some different headlines. Try out some different formatting for your body content, and some different text. You might just find yourself getting way better newsletter results than you ever have in your life.