Meaningless metrics

I’ve been having some conversations with fellow delivery folks about metrics, delivery, and bad practices. Sometimes a sender will have what appear to be good metrics, but isn’t really getting them through any good practices. They’re managing to avoid the clear indicators of bad practices (complaints, SBL listings, blocks, etc.), but only because the metrics they’re watching aren’t good ones.
So I had to laugh when a friend posted a link to a Business Insider article about how many website metrics aren’t useful indicators of the business value of a website. Then I found the original blog post referenced in the article: Bullshit Metrics. It’s a great post; you should go read it.
I’d say the concluding paragraph has as much relevance to email marketing as to web marketing.

Despite the internet’s evolution, bullshit metrics perpetuate a constant cycle of poor understanding. Let’s strive to understand how our businesses are doing and to pick better metrics–the harsher, the better. Let’s stop fooling ourselves with numbers that don’t represent reality. And let’s push the industry forward as a whole because collectively we’ll all benefit.

The sooner we can get away from treating opens as a useful email metric, the better off the email industry is going to be.

Related Posts

Standard Email Metrics

The EEC has been working on standardizing the metrics used in email marketing, and in June they published their Support the Adoption of Email Metrics (S.A.M.E.) guide, a set of definitions for the terms many email marketers use.
Under the new EEC definitions, an open is counted when either a tracking pixel is loaded or a user clicks on any link in the email, including the unsubscribe link. The open rate is defined as the number of opens (either unique or total) divided by the number of accepted emails, where accepted emails equals the number of emails sent minus the number rejected by the ISP for any reason.
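In code, the arithmetic behind those two definitions is simple. Here’s a minimal sketch in Python (the function names and the campaign numbers are mine, not from the S.A.M.E. guide):

    def accepted(sent, rejected):
        # Accepted = emails sent minus emails the ISP rejected for any reason.
        return sent - rejected

    def open_rate(opens, sent, rejected):
        # opens can be unique or total; divide by accepted mail, not by mail sent.
        return opens / accepted(sent, rejected)

    # Hypothetical campaign: 100,000 sent, 2,000 rejected, 14,700 unique opens.
    print(open_rate(14_700, 100_000, 2_000))  # 0.15, i.e. a 15% unique open rate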
The authors do caution, however, that even their measurements may undercount the number of subscribers who actually open or read an email. Some readers don’t load images or click on links but happily read and digest the content being sent. Others may never click a link but still visit a website or a brick and mortar store to purchase something based on the email.
Overall, I think the definitions created by the S.A.M.E. group accurately reflect the things they want to measure, within the limits of what is actually measurable. The definitions won’t affect conversations in the short term, but they are likely to drive changes to standard terminology over the longer term. I strongly encourage people to grab a copy of the document and see how its definitions compare with their current measurements.


Metrics, metrics, metrics

I’ve been sitting on this one for about a week, after the folks over at IBM/Pivotal Veracity called to tell me about it. But now their post is out, so I can share.
There are ISPs providing real metrics to senders: QQ and Mail.ru. Check out Laura Villevieille’s blog post for the full details.


What email metrics do you use?

Vertical Response talks about email metrics that are useful on a dashboard.
Metrics are an ongoing challenge for all marketers. The underlying need for metrics is to evaluate how effective a particular marketing program is. Picking metrics involves understanding what the goal of a particular program is. If your goal is brand recognition, then sales and click-through figures probably aren’t good metrics. If your goal is sales, then opens aren’t as good a metric as average order value or revenue per email.
Measuring email success is important. But how you choose to measure it is a critical decision. Too many marketers just use canned metrics and don’t think about what they really want to know.
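When sales are the goal, those revenue-oriented metrics are easy to compute once you can attribute orders back to a campaign. A rough sketch (the function names and figures are illustrative, not from the Vertical Response post):

    def average_order_value(orders):
        # orders: list of order totals attributed to the campaign.
        return sum(orders) / len(orders) if orders else 0.0

    def revenue_per_email(orders, accepted_emails):
        # Revenue per accepted email, rather than per open or per click.
        return sum(orders) / accepted_emails

    orders = [42.50, 17.00, 88.25]            # hypothetical attributed orders
    print(average_order_value(orders))        # 49.25
    print(revenue_per_email(orders, 98_000))  # ~0.0015 dollars per accepted email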
