Reputation as measured by the ISPs

Part 3 in an ongoing series on campaign stats and measurements. In this installment, I will look a little closer at what other people are measuring about your email and how that affects your reputation at the ISPs.
Part 1: Campaign Stats and Measurements
Part 2: Measuring Open Rate
Reputation at the ISPs is an overall measure of how responsive recipients are to your email. ISPs also look at how much valid email you are sending. Anything the ISP can measure and use to distinguish good mail from bad is used in calculating reputation.
Some of the major metrics ISPs use include the following.
Invalid Address Rates
The ISPs count how much mail from any particular IP address is hitting non-existent addresses. If you are mailing a large number of addresses that do not exist (550 user unknown), that suggests your address collection techniques are not very good. Responsible mailers do have the occasional bad address, such as typos and expired or abandoned addresses, but the percentage is low compared to the number of real addresses. How low is low? Public numbers suggest problems start at 10% user unknowns, but conversations with ISP employees suggest they treat even lower levels as a hint that there may be a problem.
To calculate the bounce rate, ISPs take the total number of invalid addresses and divide it by the total number of addresses the sender attempted to mail. Rates above 10% may cause significant delivery issues on their own; rates lower than 10% may still contribute to poor delivery through poor reputation scores.
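The calculation itself is simple. Here is a minimal sketch in Python; the function and variable names are my own, and the 10% threshold is the public number mentioned above.

```python
def bounce_rate(invalid_addresses: int, attempted_addresses: int) -> float:
    """Fraction of attempted addresses that bounced as invalid (550 user unknown)."""
    if attempted_addresses == 0:
        return 0.0
    return invalid_addresses / attempted_addresses

# Example: 1,200 user-unknown bounces out of 50,000 attempted addresses
rate = bounce_rate(1_200, 50_000)
print(f"bounce rate: {rate:.1%}")  # 2.4%
if rate > 0.10:
    print("above the 10% line: expect significant delivery problems")
else:
    print("below the 10% line, but the rate still feeds the reputation score")
```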
Spamtraps
ISPs pay a lot of attention to how much mail is hitting their “trap” or “bait” accounts. These trap accounts come from a number of different sources: old abandoned email addresses, addresses that never existed, or even role accounts. Hits to a trap account tell the ISP there are addresses on your list that did not opt in to receive mail. And if there are some addresses they know about that did not opt in, it is likely that there are other addresses that did not opt in.
Spamtraps tend to be treated as an absolute number, not as a percentage of emails. Even a single spamtrap on a list can significantly harm delivery. According to the Return Path Benchmark report, lists with a single spamtrap had nearly 20% worse delivery than lists without spamtraps.
“This is spam” clicks (FBL complaints)
Complaints from users are heavily used by ISPs. This tells them directly how many people are objecting to your email. In this case, permission is removed from the equation. Even if a sender has permission to send email, the recipient can say “no, I don’t want this, it is spam.” The ISPs put more weight on what their users tell them than on what the senders tell them. One person who works on filtering at an ISP put it this way:

The customer is always right. In my opinion, there is no such thing as ‘overuse’ of the report spam button. The more feedback we get, the better. Our job is to keep the user’s inbox in the state they want it. The more they tell us what they do and don’t want, the clearer picture we get about who is sending unwanted mail. So I would say, yes, it does affect my ability to do my job in that it enables me to actually do my job.
It might cause my job to involve more detailed research into people’s preferences and what to do with mail that people disagree about, but I don’t see that as a problem.
Just because a marketer doesn’t like that we consider our users’ opinions to be more important than theirs is not really a problem either as far as I’m concerned. I’m here to serve my users, not them. They can either send mail that people don’t respond negatively to, or I can put their mail in the spamfolder. It’s not like they are going to make any money by repeatedly mailing people who think their mail is spam anyway.

In many ways relying on the users to provide feedback is a good thing. The ISP gets a direct measure of what the recipients think without having to filter through a lot of obfuscation from bad senders.
Complaint percentages are measured by dividing the number of “this is spam” clicks by the number of emails delivered to the inbox. Percentages under 0.3% usually result in reasonably good delivery, depending on other metrics. Percentages higher than 1% usually result in poor delivery, even if other metrics are good.
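Expressed as code, that calculation looks roughly like this; the names are illustrative and the 0.3% and 1% cut-offs are the figures given above.

```python
def complaint_rate(spam_clicks: int, inboxed_messages: int) -> float:
    """'This is spam' clicks divided by messages delivered to the inbox."""
    if inboxed_messages == 0:
        return 0.0
    return spam_clicks / inboxed_messages

# Example: 150 FBL complaints against 100,000 inboxed messages
rate = complaint_rate(150, 100_000)
print(f"complaint rate: {rate:.2%}")  # 0.15%
if rate < 0.003:
    print("under 0.3%: usually reasonably good delivery, depending on other metrics")
elif rate > 0.01:
    print("over 1%: usually poor delivery, even if other metrics are good")
else:
    print("between 0.3% and 1%: delivery depends heavily on the other metrics")
```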
“This is not spam” clicks
This measures how many people track down wanted mail that was delivered to the spam folder and tell the ISP the mail is not spam. These clicks are vital in reputation scores. Senders experiencing intermittent bulk foldering are most affected by this type of stat. If your recipients don’t care enough about your mail to go into the bulk folder and find it, then the ISP believes the mail is not necessarily wanted.
ISPs calculate the ratios differently, and there is no standard formula for how much weight a “this is not spam” click carries.
The secret sauce
This is what distinguishes one ISP’s filtering process from another. Each ISP and each spam filtering company has its own secret sauce. The metrics above are ones the ISPs have confirmed to me that they measure. I believe they also measure other things, including recipient profiles, recipient clicks, and probably some stuff they won’t ever admit to outside a development meeting. The secret sauce is also how they weight the different factors; i.e., a “this is spam” click is not weighted the same as a “this is not spam” click. How important are complaints versus mail sent to dead addresses? How vital are spamtraps? Some ISPs probably even have trusted reporter setups, where people with good histories of accurately reporting spam have their reports weighted more heavily than reports from unknown people.
The good news is that, with the exception of the secret sauce, all of these factors are under the control of the sender, and senders can make changes to their mailing programs that will improve reputation and delivery.
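To make the weighting idea concrete, here is a toy sketch of a weighted reputation score. No ISP publishes its formula, so the weights, metric names, and linear structure below are invented purely to illustrate that different signals carry different weights; they are not anyone’s actual secret sauce.

```python
# Purely illustrative: real ISP scoring formulas and weights are not public.
ILLUSTRATIVE_WEIGHTS = {
    "complaint_rate": -400.0,  # "this is spam" clicks per inboxed message
    "bounce_rate":     -50.0,  # invalid-address (user unknown) rate
    "spamtrap_hits":    -5.0,  # treated as an absolute count, not a percentage
    "not_spam_rate":  +200.0,  # recipients rescuing mail from the bulk folder
}

def reputation_score(metrics: dict) -> float:
    """Toy weighted sum standing in for an ISP's private scoring model."""
    return sum(ILLUSTRATIVE_WEIGHTS[name] * value for name, value in metrics.items())

score = reputation_score({
    "complaint_rate": 0.002,  # 0.2% complaints
    "bounce_rate":    0.03,   # 3% user unknowns
    "spamtrap_hits":  1,      # a single trap address on the list
    "not_spam_rate":  0.001,  # 0.1% "this is not spam" clicks
})
print(f"toy reputation score: {score:+.2f}")  # more negative = worse
```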

Related Posts

Campaign stats and measurements

Do you know what your campaign stats mean? Do you know what it is that you’re measuring? I think there are a lot of emailers out there who have no idea what they are measuring and what those measurements mean.
The most common measurement used is “open rate.” There’s been quite a bit of discussion recently about open rates, how they’re calculated, and whether there is a better way. In my opinion, open rate can be useful, but only in some circumstances. More often it is a distraction from real measurements.
Not only have there been recent discussions about “open rate” versus “render rate” and a lot of confusion about the underlying issues, but I’ve also been working through some campaign stats questions with other people that suggest they may not actually understand the numbers they’re using.
For instance, what do the delivery statistics reported by the various mailbox monitoring companies mean? If you have 100% inbox delivery as measured by the program, does that mean all your mail has reached the recipient’s inbox?
What about bounce rates? Everyone says “keep them low” but what does low mean? How do you measure them?
Over the next few posts, I’ll talk about the different stats and measurements in common use and what they do and don’t mean.


Palpable ennui

Put any group of senders together and the conversation invariably turns to how to get email delivered to the inbox. There is an underlying flavor to most of these conversations that is quite sad. Many senders seem to believe that the delivery of their email is outside of their control and that, since the ISPs are difficult to reach, they are stuck. The ennui is palpable.
I am here to tell you that nothing could be further from the truth!
Senders are not passive victims of the evil ISPs. In 99% of cases, delivery problems are fully under the control of the sender.
Mail being deferred? Mail being blocked? Mail being delivered to the bulk folder? Senders do NOT NEED TO CALL THE ISP to fix most of these. Tickets do not need to be opened, nor do personal contacts need to be employed. You can resolve the vast majority of problems with data you already have.


Building a list for the long term

Mark Brownlow asks two key questions senders should be thinking about as they plan their list building strategy for 2009.
