Metrics

Measurements

One of the things I’ve been spending a lot of time thinking about lately is how we measure deliverability. Standard deliverability measurements include opens, bounces, complaints, and clicks. There are also other tools like probe accounts, panel data, and public blocklists. Taken together, these measurements and metrics give us an overall view of how our mail is doing.

Read More

How accurate are reports?

One of the big topics of discussion in various deliverability circles is the problems many places are seeing with delivery to Microsoft properties. One of the challenges is that Microsoft seems to be happy with how their filters are working, while senders are seeing vastly different data. That got me thinking about reporting: how we generate reports, and how we know the reports are correct.

Read More

Improving Gmail Delivery

Lately I’m hearing a lot of people talk about delivery problems at Gmail. I’ve written quite a bit about Gmail (Another way Gmail is different, Gmail filtering in a nutshell, Poor delivery at Gmail but nowhere else, Insight into Gmail filtering) over the last year and a half or so. But those articles all focus on different parts of Gmail delivery, and it’s probably time for a summary-type post.

Read More

Engagement, Engagement, Engagement

I saw a headline today:
New Research from Return Path Shows Strong Correlation Between Subscriber Engagement and Spam Placement
I have to admit, my first reaction was “Uh, Yeah.” But then I realized that there are some email marketers who do not believe engagement is important for email deliverability. This is exactly the report they need to read. It lays out the factors that ISPs look at to determine if email is wanted by the users. Senders have to deal with vague metrics like opens and clicks, but the ISPs have access to user behavior. ISPs can see if mail is replied to, forwarded, or deleted without reading. They monitor if a user hits “this-is-spam” or moves the message to their junk folder. All of these things are signals about what the users want and don’t want.
Still, there are the folks who will continue to deny engagement is a factor in deliverability. Most of the folks in this group profit based on the number of emails sent. Therefore, any message about decreasing sends hurts their bottom line. These engagement deniers have set out to discredit anyone who suggests that targeting, segmentation, or engagement improve email delivery and get more mail to the inbox.
There’s another group of deniers who may or may not believe engagement is the key to the inbox, but they don’t care. They have said they will happily suffer lower inbox delivery if it means they can send more mail. They don’t necessarily want to discredit deliverability, but they really don’t like that deliverability can stop them from sending.
Whether or not you want to believe engagement is a critical factor in reaching your subscribers, it is. Saying it’s not doesn’t change the facts.
There are three things important in deliverability: engagement, engagement, engagement.

Read More

February 2015 – The month in email

This was a short and busy month at WttW!

We attended another great M3AAWG conference, and had our usual share of interesting discussions, networking, and cocktails. I recapped our adventures here, and shared a photo of the people who keep your email safe while wearing kilts as well. We also commended Jayne Hitchcock on winning the Mary Litynski award for her work fighting abuse and cyberstalking.

Read More

Meaningless metrics

I’ve been having some conversations with fellow delivery folks about metrics and delivery and bad practices. Sometimes a sender will have what appear to be good metrics, but isn’t earning them through good practices. They manage to avoid the clear indicators of bad practices (complaints, SBL listings, blocks, etc.), but only because the metrics themselves aren’t very meaningful.
So it made me laugh when a friend posted a link to a Business Insider article about how many website metrics aren’t useful indicators of the business value of a website. Then I found the original blog post referenced in the article: Bullshit Metrics. It’s a great post; you should go read it.
I’d say the concluding paragraph has as much relevance to email marketing as to web marketing.

Read More

Metrics, metrics, metrics

I’ve been sitting on this one for about a week, ever since the folks over at IBM/Pivotal Veracity called to tell me about it. But now their post is out, so I can share.
There are ISPs providing real metrics to senders: QQ and Mail.ru. Check out Laura Villevieille’s blog post for the full details.

Read More

Reporting email disposition

Most regular readers know I think open and click-through rates are actually proxy measurements. That is, they measure things that correlate with reading and interacting with an email, and can be used to estimate how much an email is wanted by the recipients.
The holy grail is, of course, having ISPs report back exact metrics on what a user did with an email. Did the user read it? Did it stay open on their screen a long time? Did the user just mark it read or throw it away? What happened to the message? Marketers would love this information.
It’s unlikely the ISPs will ever provide this information to marketers. Take away all the technical challenges (and there are some significant ones) and there are still social challenges to making this data available. Current user contracts protect the privacy of the user, and local laws prohibit sharing this data. And there is a vocal group of privacy advocates who will protest and raise a big stink.
I’m not sure why email gets the special treatment of expecting the channel owners to provide detailed disposition data. In no other direct marketing venue is that information collected or provided. TV stations can’t tell advertisers whether or not someone watched a commercial, fast forwarded through it or got up to grab a beer from the fridge. The post office can’t tell direct mail marketers whether or not a recipient read the mail or just dumped it in the big recycling bin the post office provides for unwanted messages. Billboard owners can’t tell advertisers how many people read the billboard.
Since we can’t get exact read rates from ISPs, what do we do? We look at proxy numbers.
Read rate directly measures who opened the message. Open rate is a proxy: it measures who displayed images in the message.
Read rate can be measured only by people who have access to the user’s inbox. The ISPs can measure read rate because they have full access to the mailbox, though this requires the user to access the mailbox through webmail or IMAP. Some third party mailbox add-ons can measure it, but this requires the cooperation of the mailbox owner: if the mailbox owner doesn’t install the reporting tool, the third party doesn’t have access to the data. Only groups with access to the end user’s mailbox can measure this rate.
Open rate can be measured by people who have access to the server where the images are hosted. Senders, ESPs, and third parties can measure it if they put unique image IDs or tracking pixels in their emails. Open tracking does require the cooperation of the recipient: they have to have images on. No images on, no open tracking. Ironically, ISPs cannot measure open rate, because they have no access to the image hosting servers.
Click rate can be measured by people who have access to the server that hosts the website. The same people who can measure opens can measure clicks. Some ISPs can measure clicks: Hotmail used to pass every URL through a proxy they hosted and could count clicks that way, and AOL controls the client so they could measure the number of clicks on a link. I’ve heard trustworthy folks claim both that ISPs are measuring clicks and that they’re not measuring clicks (any of the Barrys want to comment?).
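To make the mechanics concrete, here’s a minimal sketch of how a sender might implement both kinds of tracking. Everything in it is hypothetical illustration (the tracking domain, URL paths, and ID scheme are mine), not any particular ESP’s implementation.

    # Illustrative sketch of sender-side open and click tracking.
    # The tracking domain, URL paths, and ID scheme are hypothetical.
    from urllib.parse import quote

    TRACKING_HOST = "https://track.example.com"  # hypothetical tracking server

    def tracking_pixel(campaign_id: str, recipient_id: str) -> str:
        # A unique 1x1 image per recipient; when the mail client fetches it,
        # the server logs an "open". No image loading, no open recorded.
        return (f'<img src="{TRACKING_HOST}/open/{campaign_id}/{recipient_id}.gif"'
                ' width="1" height="1" alt="">')

    def tracked_link(campaign_id: str, recipient_id: str, destination: str) -> str:
        # Wrap the real URL in a redirect the sender controls; the redirect
        # hit is logged as a "click" before forwarding to the destination.
        return (f"{TRACKING_HOST}/click/{campaign_id}/{recipient_id}"
                f"?url={quote(destination, safe='')}")

This is also why the measurement lives with whoever runs the tracking server: the ISP never sees these requests, and a recipient who doesn’t load images never triggers the open log.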
Without controlling the inbox, though, senders have to rely on proxy measurements to judge the effectiveness of any particular campaign. But at least email marketers have proxies to use for measurement.

Read More

More on Yahoo and Engagement

A friend of the blog contacted me earlier today and pointed out that the news about Yahoo and engagement that Dan posted, and that I blogged about last week, was actually reported by George Bilbrey in a Mediapost article on August 1.

Read More

Yahoo looking harder at engagement

In a post this morning, Dan Deneweth from Responsys says he’s received confirmation from Yahoo that they have increased the value of engagement metrics when making delivery decisions.
The really great thing, for the ISPs, about engagement metrics is that they directly measure how much a particular email is wanted by recipients. There’s no guessing about it: they measure how engaged the recipient is with the mail. Even better is the fact that, unlike proxy metrics, engagement metrics are extremely difficult for the sender to manipulate. As a sender I can artificially lower complaints and bounces without improving the mail I’m sending. But I can’t improve engagement metrics without actually engaging my recipients.
As I wrote back in 2010:

Read More

Bounces, complaints and metrics

In the email delivery space there are a lot of numbers we talk about including bounce rates, complaint rates, acceptance rates and inbox delivery rates. These are all good numbers to tell us about a particular campaign or mailing list. Usually these metrics all track together. Low bounce rates and low complaint rates correlate with high delivery rates and high inbox placement.

Read More

What email metrics do you use?

Vertical Response talks about email metrics that are useful on a dashboard.
Metrics are an ongoing challenge for all marketers. The underlying need for metrics is to evaluate how effective a particular marketing program is. Picking metrics involves understanding what the goal is for a particular program. If your goal is brand recognition, then perhaps sales and click-through figures aren’t good metrics. If your goal is sales, then opens is not as good a metric as average order value or revenue per email.
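As a trivial illustration of matching the metric to the goal, here’s a sketch that computes sales-oriented numbers instead of canned open counts; the figures and field names are invented for the example.

    # Toy example: goal-appropriate sales metrics from campaign totals.
    # All figures are invented for illustration.
    def sales_metrics(revenue: float, orders: int, emails_delivered: int) -> dict:
        return {
            "average_order_value": revenue / orders if orders else 0.0,
            "revenue_per_email": revenue / emails_delivered if emails_delivered else 0.0,
        }

    # A campaign that moved $4,200 across 60 orders from 20,000 delivered mails:
    print(sales_metrics(4200.0, 60, 20000))
    # {'average_order_value': 70.0, 'revenue_per_email': 0.21}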
Measuring email success is important. But how you choose to measure it is a critical decision. Too many marketers just use canned metrics and don’t think about what they really want to know.

Read More

Standard Email Metrics

The EEC has been working on standardizing metrics used in email marketing, publishing a set of definitions for terms many email marketers use. Their Support the Adoption of Email Metrics (S.A.M.E.) guide came out in June.
Under the new EEC definitions an open is measured when either a tracking pixel is displayed or a user clicks on any link in the email, including the unsubscribe link. Open rate is defined as the number of opens (either unique or total) divided by the number of accepted emails. Accepted emails equals the number of emails sent minus the number of emails rejected by the ISP for any reason.
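In code, the S.A.M.E. arithmetic is straightforward. This sketch just restates the definitions above; the variable names and sample numbers are mine, not the EEC’s.

    # Open rate per the S.A.M.E. definitions quoted above:
    # accepted = sent - rejected; open rate = opens / accepted.
    def open_rate(opens: int, sent: int, rejected: int) -> float:
        accepted = sent - rejected
        return opens / accepted if accepted else 0.0

    # 100,000 sent, 2,000 rejected, 24,500 opens (unique or total,
    # depending on which open count you are reporting):
    print(f"{open_rate(24500, 100000, 2000):.1%}")  # 25.0%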
The authors do caution, however, that even their measurements may undercount the number of email subscribers who actually open or read an email. Some readers don’t load images or click on links but happily read and digest the content being sent. Others may not click on a link but actually visit a website or brick and mortar store to purchase something based on the email.
Overall, I think the definitions created by the S.A.M.E. group accurately reflect the things they want to measure within the limits of what is actually measurable. Their definitions won’t affect conversations in the short term, but are likely to drive change to standard terminology over the longer term. I do strongly encourage people to grab a copy of the document and see how its definitions compare with their current measurements.

Read More

When an open is not a sign of interest

A lot of people, including myself, are using opens as one of the measures of engagement. This, as a general rule, is not a bad measure. However, there are people who will open email not because they’re interested in it, but because they know it is spam.
Take, for instance, the email address I acquired in 1993. Yes, I still have this address. I stopped using it to sign up for lists in 1999 and stopped using it for most of the rest of my mail around 2001. This address, though, is on any number of spam mailing lists. The spam that gets through is usually sent by hard-core spammers. The ISP that hosts that mailbox uses Communigate Pro to filter mail, so much of the casual spam is filtered.
Generally, if I open an email (and load images or click through) on that account it is only in order to track down a spammer. For instance, I’m getting a lot of spam there from affiliates offering me the opportunity to purchase printing services for a very low price. I have actually been opening the mail, and clicking through. But I’m not clicking through because I’m interested in purchasing. I’m clicking through to see if my reports to abuse@ printer are resulting in any action against the spammers. (They’re not).
The thing is, though, I know that by clicking through on ads, I’ve now been promoted by the spammer to the “clicks on emails! it’s a live address!” list. Which only means I’m going to get more spam from them. Lucky me.
Using clicks and opens as a measure of engagement isn’t necessarily bad. But when using them you have to understand the limitations of the measurement, and recognize that what you think it’s telling you may not be what it’s actually telling you.

Read More

Standardizing email metrics

Slogging towards e-mail metrics standardization is a report by Direct Mag on the efforts of the Email Experience Council to standardize definitions related to email marketing.

Read More

Failed delivery of permission based email

A few weeks ago, ReturnPath published a study showing that 20% of permission based email was blocked. I previously discussed the definition of permission based email and that not all the mail described as permission based is actually sent with the permission of the recipient. However, I consider this only a small fraction of the mail RP is measuring, somewhere in the 3 – 5% range. What happens with the other 15 – 17% of that mail? Why is it being blocked?
There are three primary things I see that cause asked-for and wanted email to be blocked.

Read More

Permission Based Emails? Are you sure?

Yesterday I wrote about the ReturnPath study showing 21% of permission based email does not make it to the inbox. There are a number of reasons I can think of for this result, but I think one of the major ones is that not all the mail they are monitoring is permission based. I have no doubt that all of the RP customers say that the mail they’re sending is permission based; I also have no doubt that not all of it is.
Everyone who sends mail sends permission based email. Really! Just ask them!
In 10 years of professionally working with senders I have yet to find a marketer who says anything other than that all their email is permission based. Every email marketer, from those who buy email addresses to those who do fully confirmed verified opt-in with a cherry on top, will claim all their email is permission based. And some of the mailers I’ve worked with in the past have been listed on ROKSO. None of these mailers will ever admit that they are not sending permission based email.
Going back to ReturnPath’s data, we don’t really know what permission based email means in this context, and so we don’t know if the mail is legitimately or illegitimately blocked. My guess is that some significant percentage of the 20% of email to the probe accounts that doesn’t make it to the inbox is missing because the sender does not have clear recipient permission.
When even spammers describe their email as permission based email marketing, what value does the term have?

Read More

Delivery Metrics

Last week ReturnPath published a study that shows 20% of permission based email fails to be delivered to the inbox. For this study, ReturnPath looked at the mail sent by their mailbox monitor customers and counted the number of deliveries to the inbox, the number of deliveries to the bulk folder and the number of emails that were not delivered.
At US ISPs, 21% of the permission based emails sent to the ReturnPath probe network did not make it to the inbox. 3% of the emails sent went to the bulk folder and 17% did not make it to the mailbox at all. MSN/Hotmail and Gmail were the worst ISPs to get mail to; they each failed to deliver more than 20% of the mail that was sent to them. At Canadian ISPs even less of the mail made it to the inbox, primarily because primus.ca is such a large portion of the Canadian market and they use Postini as a filter. Postini is quite an aggressive filter and takes no feedback from senders.
ReturnPath’s take home message on the survey is that one set of metrics is not enough to effectively evaluate a marketing program. Senders need to know more about their mailings than they can discover from just the bounce rate or the revenue rate or response rate or open rate.
There are a lot of reasons an email doesn’t get to the recipient’s inbox or bulk folder. Mail can be hard blocked at the MTA and rejected by the ISP outright. Mail can be soft blocked at the MTA, where the ISP slows down sending; sometimes this is enough to cause the sending MTA to stop attempting to deliver the mail, so the mail never shows up. Both of these types of blocks are usually visible when looking at the bounce rate.
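In SMTP terms those two cases show up as permanent (5xx) versus transient (4xx) replies. Here’s a rough sketch of how a sending system might bucket them; the retry cutoff is invented, as real MTAs use their own time- or count-based limits.

    # Rough sketch: bucketing SMTP replies into hard and soft blocks.
    MAX_RETRIES = 10  # hypothetical cutoff; real MTAs vary

    def classify_reply(code: int, attempts: int) -> str:
        if 200 <= code < 300:
            return "delivered"
        if 500 <= code < 600:
            return "hard bounce"  # permanent reject, shows up in the bounce rate
        if 400 <= code < 500:
            # Tempfail: the MTA queues the mail and retries. If it eventually
            # gives up, the message quietly never arrives.
            return "soft bounce, giving up" if attempts >= MAX_RETRIES else "retrying"
        return "unrecognized reply"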
Some ISPs accept mail but then fail to deliver it to the recipient. Everything on the sender end says the ISP accepted it for delivery but the ISP just drops it on the floor. This is the type of block that a mailbox monitoring program is best able to identify.
Despite all the discussions of numbers, many marketers are still not measuring the variables in their email campaigns. Ken Magill wrote today about a study released by eROI that indicates more than a third of marketers are not doing any testing on their mailings.
Now, both of these studies were done in an attempt to sell products; however, the numbers discussed should make smart senders think about what they are measuring in regards to their email campaigns, how they are measuring those factors, and what the measurements mean.

Read More

The Weekend Effect

Sending mail only Monday through Friday can cause reputation and delivery problems at some ISPs, even when senders are doing everything right. This “weekend effect” is a consequence of how ISPs measure reputation over time.
Most ISPs calculating complaint rate use a simple calculation. They measure how many “this is spam” clicks a source IP generates in a 24-hour period, then divide that number by how many emails were delivered to the inbox in the same 24-hour period.
The weekend effect happens when a sender mails on weekdays and not on the weekend, thus lowering the number of emails delivered to the inbox. Recipients, however, still read mail on the weekend, and they still hit the “this is spam” button on the email. Even if the number of “this is spam” clicks is lower than on a normal weekday, with little incoming email the rate of spam complaints goes above ISP thresholds. Even a very well run mailing program may see spikes in complaint rate on the weekends.
Now, when the ISPs measure complaint rates over time, they take the average of those daily complaint rates. If the rates spike high enough on the weekend (and they can spike into the 1 – 3% range, even for a well run list), that can hurt the sender’s reputation.
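To see how the math goes wrong, here’s a toy calculation in Python. The numbers are invented, but the shape is the point: complaints barely drop on the weekend while inbox deliveries fall to a trickle.

    # Toy numbers showing the weekend spike in complaint rate
    # (complaints / inbox deliveries per 24 hours). All figures invented.
    week = {
        # day: (inbox_deliveries, this_is_spam_clicks)
        "Mon": (100_000, 120), "Tue": (100_000, 110), "Wed": (100_000, 115),
        "Thu": (100_000, 105), "Fri": (100_000, 118),
        "Sat": (1_000, 30),    # no send; only stragglers delivered
        "Sun": (1_000, 25),    # recipients still read and complain
    }

    for day, (delivered, complaints) in week.items():
        print(f"{day}: {complaints / delivered:.2%}")
    # Weekdays land near 0.1%; Saturday and Sunday jump to 3.00% and 2.50%,
    # and averaging those daily rates drags the weekly number way up.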
The good news is that ISPs are aware of the weekend effect and take it into account when manually reviewing complaints. The bad news is that not all of the major ISPs take it into account when programmatically calculating reputation.
There isn’t very much senders can do to combat the weekend effect, except be aware that it can happen and may be responsible for poor mailing performance on Monday. If you are seeing delivery problems you think may be a result of the weekend effect, you can contact the ISPs and ask for a manual review of your reputation. Some ISPs can provide manual mitigation for senders with otherwise clean stats.

Read More