Reputation as measured by the ISPs
Part 3 in an ongoing series on campaign stats and measurements. In this installment, I will look a little more closely at what other people measure about your email and how that affects your reputation at the ISPs.
Part 1: Campaign Stats and Measurements
Part 2: Measuring Open Rate
Reputation at the ISPs is an overall measure of how responsive recipients are to your email. ISPs also look at how much valid email you are sending. Anything the ISP can measure and use to distinguish good mail from bad is used in calculating reputation.
Some of the major metrics ISPs use include the following.
Invalid Address Rates
The ISPs count how much mail from any particular IP address is hitting non-existent addresses. If you are mailing a large number of email addresses that do not exist (550 user unknown), that suggests your address collection techniques are not very good. Responsible mailers do have the occasional bad address, including typos and expired or abandoned accounts, but the percentage in comparison to the number of real email addresses is low. How low is low? Public numbers suggest problems start at 10% user unknowns, but conversations with ISP employees suggest they treat even lower levels as a hint that there may be a problem.
To calculate the bounce rate, ISPs take the total number of addresses that were invalid and divide that by the total number of addresses the sender attempted to mail. Rates above 10% may cause significant delivery issues on their own; rates lower than 10% may still contribute to poor delivery through poor reputation scores.
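As a quick sketch, the bounce-rate arithmetic just described looks like this (the function name and sample numbers are my own; the 10% threshold is the public figure mentioned above):

```python
def bounce_rate(invalid_addresses: int, attempted: int) -> float:
    """Fraction of attempted addresses that bounced as invalid (550 user unknown)."""
    if attempted == 0:
        return 0.0
    return invalid_addresses / attempted

# Example: 1,200 unknown users out of 50,000 attempted deliveries
rate = bounce_rate(1200, 50000)
print(f"{rate:.1%}")  # 2.4% - under the 10% danger line, but not zero
```

A 2.4% rate would not trigger blocking on its own, but it still feeds into the overall reputation score.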
Spamtrap Hits
ISPs pay a lot of attention to how much mail is hitting their “trap” or “bait” accounts. These trap accounts come from a number of different sources: old abandoned email addresses, addresses that never existed, or even role accounts. A hit to a trap account tells the ISP there are addresses on your list that did not opt in to receive mail. And if some addresses they know about did not opt in, it is likely that other addresses did not opt in either.
Spamtraps tend to be treated as an absolute number, not as a percentage of emails sent. Even a single spamtrap on a list can significantly harm delivery: according to the ReturnPath Benchmark report, lists with a single spamtrap had nearly 20% worse delivery than lists without spamtraps.
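To underline the absolute-count point, here is a minimal sketch. The threshold logic is my own illustration of how such a check might work, not any ISP's actual rule:

```python
def spamtrap_flag(trap_hits: int) -> bool:
    """Spamtrap hits are judged as an absolute count: any hit at all is a
    red flag, regardless of how large the list is."""
    return trap_hits > 0

# A single hit on a 5-million-address list is still a problem,
# even though it is a vanishingly small percentage of the mail sent.
assert spamtrap_flag(1) is True
assert spamtrap_flag(0) is False
```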
“This is spam” clicks (FBL complaints)
Complaints from users are heavily used by ISPs. This tells them directly how many people are objecting to your email. In this case, permission is removed from the equation. Even if a sender has permission to send email, the recipient can say “no, I don’t want this, it is spam.” The ISPs put more weight on what their users tell them than on what the senders tell them.
The customer is always right. In my opinion, there is no such thing as ‘overuse’ of the report spam button. The more feedback we get, the better. Our job is to keep the user’s inbox in the state they want it. The more they tell us what they do and don’t want, the clearer picture we get about who is sending unwanted mail. So I would say, yes, it does affect my ability to do my job in that it enables me to actually do my job.
It might cause my job to involve more detailed research into people’s preferences and what to do with mail that people disagree about, but I don’t see that as a problem.
The fact that a marketer doesn’t like that we consider our users’ opinions to be more important than theirs is not really a problem either, as far as I’m concerned. I’m here to serve my users, not them. They can either send mail that people don’t respond negatively to, or I can put their mail in the spam folder. It’s not like they are going to make any money by repeatedly mailing people who think their mail is spam anyway.
In many ways, relying on users to provide feedback is a good thing. The ISP gets a direct measure of what recipients think without having to filter through a lot of obfuscation from bad senders.
Complaint percentages are measured by dividing the number of “this is spam” clicks by the number of emails delivered to the inbox. Percentages under 0.3% usually result in reasonably good delivery, depending on other metrics. Percentages higher than 1% usually result in poor delivery, even if other metrics are good.
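Here is the same calculation as a sketch, using the rough public thresholds quoted above (the function names and sample figures are mine; real ISPs weight this alongside everything else):

```python
def complaint_rate(spam_clicks: int, inbox_delivered: int) -> float:
    """Fraction of inbox-delivered mail that drew a 'this is spam' click."""
    if inbox_delivered == 0:
        return 0.0
    return spam_clicks / inbox_delivered

def rough_outlook(rate: float) -> str:
    """Map a complaint rate to the rough public thresholds in the text."""
    if rate < 0.003:      # under 0.3%: usually reasonably good delivery
        return "likely good"
    if rate > 0.01:       # over 1%: usually poor delivery
        return "likely poor"
    return "depends on other metrics"

# 45 complaints on 20,000 inboxed messages is 0.225%
print(rough_outlook(complaint_rate(45, 20000)))  # likely good
```

Note that the denominator is mail delivered to the inbox, not mail sent: mail that lands in the bulk folder rarely generates a “this is spam” click.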
“This is not spam” clicks
This measures how many people track down wanted mail delivered to the spam folder and tell the ISP the mail is not spam. These clicks are vital in reputation scores. Senders experiencing intermittent bulk foldering are most affected by this metric. If your recipients don’t care enough about your mail to go into the bulk folder and find it, then the ISP concludes the mail is not necessarily wanted.
ISPs calculate these ratios differently, and there is no standard formula for how much a “this is not spam” click is worth.
The secret sauce
This is what distinguishes one ISP’s filtering process from another. Each ISP and each spam filtering company has its own secret sauce. The factors above are ones the ISPs have confirmed to me that they measure. I believe they also measure other things, including recipient profiles, recipient clicks, and probably some things they will never admit outside a development meeting. The secret sauce is also how they weight the different factors; i.e., a “this is spam” click is not weighted the same as a “this is not spam” click. How important are complaints versus mail sent to dead addresses? How vital are spamtraps? Some ISPs probably even have trusted-reporter setups, where people with good histories of accurately reporting spam have their reports weighted more heavily than those of unknown people.
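Purely as an illustration of the weighting idea, a score might combine the metrics like this. To be clear, these weights are entirely made up; the real weights are exactly the secret sauce no ISP publishes:

```python
# Hypothetical weights for illustration only; real ISPs keep theirs secret.
WEIGHTS = {
    "complaint_rate": 40.0,   # "this is spam" clicks weigh heavily
    "bounce_rate": 25.0,      # mail to dead addresses
    "spamtrap_hits": 30.0,    # near-absolute: even one hit hurts
    "not_spam_rate": -20.0,   # "this is not spam" clicks improve reputation
}

def reputation_penalty(metrics: dict) -> float:
    """Toy weighted sum: a higher penalty means a worse reputation."""
    return sum(WEIGHTS[name] * value for name, value in metrics.items())

penalty = reputation_penalty({
    "complaint_rate": 0.002,  # 0.2% complaints
    "bounce_rate": 0.03,      # 3% unknown users
    "spamtrap_hits": 0,       # no trap hits
    "not_spam_rate": 0.001,   # a little rescue activity
})
```

The shape of the formula matters less than the idea: the same raw numbers can produce very different reputations at different ISPs, because each one weights them differently.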
The good news is that, with the exception of the secret sauce, all of these factors are under the sender’s control, and senders can make changes to their mailing programs that will improve their reputation and delivery.