One of the things I miss about being in science is the regular discussions (sometimes heated) about data and experimental results. To be fair, I get some of that when talking about email stuff with Steve. We each have some strong viewpoints and aren’t afraid to share them with each other and with other people. In fact, one of the things we hear most when meeting folks for the first time is, “I love it when you two disagree with each other on that mailing list!” Both of us have engineering and science backgrounds, so we can argue in that vein.
One of the challenges of seemingly contradictory data is figuring out why it seems to disagree. Of course, in science the first step is always to look at your experimental design and data collection. Did I do the experiment right? (Do it again. Always do it again.) Did I record the data correctly? Is the design right? What did I do differently from what you did? For instance, at one of my labs we discovered that mixing a reagent in plastic tubes created a different outcome from mixing the reagent in glass vials. There are so many variables you don’t even think of as variables that affect the outcome of an experiment.
What’s that got to do with email?
In the email space we have lots of people sharing data. Some of it is data we like – that is data that confirms our perceptions. And some of it is data we don’t like – data that contradicts our perceptions.
Recently two different ESPs have published contradictory data about purging subscribers and removing recipients from your lists. Mailchimp published Inactive subscribers are still valuable customers and Hubspot published What Happened to Our Metrics After We Stopped Sending So Much Email.
These two publications seem to be a bit contradictory. One is saying that inactive subscribers, subscribers who haven’t opened or clicked on emails in a while, are still valuable sources of revenue. The other is saying removing inactive subscribers improves email metrics like open and click rates. So what’s really going on?
Different methods measure different things
Mailchimp looked at the revenue generated by inactive subscribers as compared to revenue from non-subscribers. They specifically looked at e-commerce senders mailing to previous purchasers.
Hubspot looked at various email metrics and how they changed when removing subscribers. They specifically looked at recipients to their own mailing list.
In many ways that’s the end of the story. The two studies looked at different populations and measured different things. They are not comparable. They’re not even really contradictory, given the significant differences in the study populations.
Well, that’s not very useful
Sorry. Research gives us answers, but those answers aren’t always clear and actionable.
The reality is, neither of these was a designed experiment. Rather, they both describe observed behavior in “the wild” as it were. The research is much closer to epidemiology than any other branch of science. Epidemiology tells us what happens, but doesn’t necessarily tell us how to either make something happen or stop something from happening. Back when I was taking poultry pathology in grad school we did quite a bit of epidemiology and it’s HARD. For instance, one example we studied was an avian disease outbreak that seemed totally random. After months and months of work, research, interviews and study they finally figured out the infection was being carried on the car tires of a particular salesperson. That’s how hard epidemiology is.
A lot of deliverability and email marketing is like epidemiology. We know what worked in the past, but sometimes we’re chasing a guy with a contagious disease on his tires.
No, really, what do you think about the data?
I think the data is right. And I do think we can take some lessons from it.
- Hubspot did see increased email engagement with their subscribers when they stopped mailing quite so much.
- Mailchimp customers did see actual revenue from their inactive subscribers.
Let’s rephrase what Mailchimp said they discovered: Inactive subscribers buy more than non-subscribers and don’t buy as much as active subscribers. That’s one of those findings where my only response is, “Well, yes, we all kinda knew that, but it’s nice someone did the work.”
Let’s rephrase what Hubspot said they discovered: If you send too much mail you wear out your recipients and they pay less attention. Again, we knew that.
But what didn’t they say?
- Mailchimp didn’t mention delivery changes.
- Hubspot didn’t mention revenue.
We don’t know whether Mailchimp saw deliverability differences. In the face of more revenue it’s not really an issue, but their delivery stats may have been worse.
We don’t know if Hubspot saw increased revenue (although we do have their 10-K that shows some revenue increase). But they’re not a commerce shop; they’re not directly selling through email. Their emails drive potential readers to their content. Eventually the hope is (I’m assuming) the readers will convert, but the Hubspot emails are not the same as e-commerce email.
You didn’t answer the question.
I did, though. Both things are true.
If you are in e-commerce you’ll make revenue from your inactive subscribers; so you should prune them carefully.
If you are driving site engagement you’ll increase readership by removing inactive subscribers; so you can probably be more aggressive in pruning.
What’s right for my program?
Is your program closer to the mail studied by Mailchimp? Or is your program closer to the mail studied by Hubspot?
We work with a lot of different kinds of senders and work with them to find the answer right for their business, their subscribers and their marketing program. Sometimes it means pruning, sometimes it doesn’t. Contact us for more information on how we can help your program make sense of seemingly conflicting data.