What email metrics do you use?

Vertical Response talks about email metrics that are useful on a dashboard.
Metrics are an ongoing challenge for all marketers. The underlying need for metrics is to evaluate how effective a particular marketing program is. Picking metrics means understanding the goal of a particular program. If your goal is brand recognition, then sales and click-through figures probably aren’t good metrics. If your goal is sales, then opens are not as good a metric as average order value or revenue per email.
Measuring email success is important. But how you choose to measure it is a critical decision. Too many marketers just use canned metrics and don’t think about what they really want to know.
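To make that difference concrete, here is a minimal sketch, in Python and with entirely hypothetical campaign figures, of how goal-aligned metrics such as revenue per email and average order value sit alongside a recognition-style metric like open rate:

```python
# Sketch: computing goal-aligned email metrics from raw campaign counts.
# All figures below are hypothetical, purely for illustration.

emails_sent = 50_000
emails_opened = 9_500
orders = 420
revenue = 21_000.00  # total revenue attributed to the campaign, in dollars

open_rate = emails_opened / emails_sent       # recognition-style metric
average_order_value = revenue / orders        # sales-style metric
revenue_per_email = revenue / emails_sent     # sales-style metric

print(f"Open rate:           {open_rate:.1%}")
print(f"Average order value: ${average_order_value:.2f}")
print(f"Revenue per email:   ${revenue_per_email:.4f}")
```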

Related Posts

Standardizing email metrics

Slogging towards e-mail metrics standardization is a report by Direct Mag on the efforts of the Email Experience Council to standardize definitions related to email marketing.

Read More

Failed delivery of permission based email

A few weeks ago, ReturnPath published a study showing that 20% of permission-based email was blocked. I previously discussed the definition of permission-based email and the fact that not all the mail described as permission based is actually sent with the permission of the recipient. However, I consider that only a small fraction of the mail ReturnPath is measuring, somewhere in the 3 – 5% range. What happens with the other 15 – 17% of that mail? Why is it being blocked?
There are three primary things I see that cause asked-for and wanted email to be blocked.

Read More

Delivery Metrics

Last week ReturnPath published a study showing that 20% of permission-based email fails to be delivered to the inbox. For this study, ReturnPath looked at the mail sent by their mailbox monitor customers and counted the number of deliveries to the inbox, the number of deliveries to the bulk folder and the number of emails that were not delivered at all.
At US ISPs, 21% of the permission-based emails sent to the ReturnPath probe network did not make it to the inbox. 3% of the emails sent went to the bulk folder and 17% did not make it to the mailbox at all. MSN/Hotmail and Gmail were the worst ISPs to get mail to; each failed to deliver more than 20% of the mail sent to them. At Canadian ISPs, even less of the mail made it to the inbox, primarily because primus.ca makes up such a large portion of the Canadian market and uses Postini as a filter. Postini is quite an aggressive filter and takes no feedback from senders.
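For anyone who wants to see the arithmetic behind that kind of breakdown, here is a minimal sketch, using made-up seed counts rather than ReturnPath’s data, of how inbox, bulk and missing rates fall out of raw probe-network counts:

```python
# Sketch of the mailbox-monitor arithmetic: the three rates are just
# counts from a probe (seed) network divided by the total sent.
# The counts here are made up for illustration.

seeds_sent = 1_000
inbox = 790
bulk_folder = 30
missing = seeds_sent - inbox - bulk_folder  # sent, but never delivered anywhere

inbox_rate = inbox / seeds_sent
bulk_rate = bulk_folder / seeds_sent
missing_rate = missing / seeds_sent

print(f"Inbox:   {inbox_rate:.0%}")
print(f"Bulk:    {bulk_rate:.0%}")
print(f"Missing: {missing_rate:.0%}")
```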
ReturnPath’s take-home message from the survey is that one set of metrics is not enough to effectively evaluate a marketing program. Senders need to know more about their mailings than they can discover from the bounce rate, revenue rate, response rate or open rate alone.
There are a lot of reasons an email doesn’t get to the recipient’s inbox or bulk folder. Mail can be hard-blocked at the MTA and rejected by the ISP outright. Mail can be soft-blocked at the MTA, with the ISP slowing down sending; sometimes this is enough to cause the sending MTA to stop attempting delivery, so the mail never shows up. Both of these types of blocks are usually visible when looking at the bounce rate.
Some ISPs accept mail but then fail to deliver it to the recipient. Everything on the sender end says the ISP accepted it for delivery but the ISP just drops it on the floor. This is the type of block that a mailbox monitoring program is best able to identify.
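As a rough illustration of how these cases look from the sending side, here is a small sketch that classifies standard SMTP reply codes. It is a deliberate simplification, not a description of any particular MTA’s bounce handling:

```python
# Illustrative classification of SMTP delivery attempts by reply code.
# 5xx = hard block (permanent rejection), 4xx = soft block (deferral),
# 2xx = accepted; note that accepted mail can still be silently dropped,
# which only a probe/seed account will reveal.

def classify_smtp_reply(code: int) -> str:
    if 200 <= code < 300:
        return "accepted"    # looks delivered on the sender side
    if 400 <= code < 500:
        return "soft block"  # temporary failure; the MTA retries and may give up
    if 500 <= code < 600:
        return "hard block"  # permanent rejection; counts toward the bounce rate
    return "unknown"

if __name__ == "__main__":
    for code in (250, 421, 451, 550, 554):
        print(code, classify_smtp_reply(code))
```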
Despite all the discussions of numbers, many marketers are still not measuring the variables in their email campaigns. Ken Magill wrote today about a study released by eROI that indicates more than a third of marketers are not doing any testing on their mailings.
Now, both of these studies were done in an attempt to sell products; however, the numbers discussed should make smart senders think about what they are measuring in their email campaigns, how they are measuring those factors and what the measurements mean.

Read More