Twisting information around

One of my mailing lists was asking questions today about an increase in invitation mailings from Spotify. I’d heard about them recently, so I started digging through my mailbox to see if I’d received one of these invites. I hadn’t, but it clued me into a blog post from early this year that I hadn’t seen before.
Research: ESPs might get you blacklisted.
That article is full of FUD, and the author quite clearly doesn’t understand what the data he relies on actually means. He also doesn’t provide enough information for us to repeat what he did.
But I think his take on the publicly available data is a common one. A lot of people don’t quite understand what the public data means or how it is collected. We can use his post as a starting point for understanding what publicly available data tells us.
The author chooses 7 different commercial mailers as his examples. He claims the data on these senders will let us evaluate ESPs, but these aren’t ESPs. At best they’re ESP customers, and we don’t even know that for sure. He claims that shared IPs mean shared reputation, which is true. But he never establishes that these senders are on shared IPs. In fact, I would bet my own reputation on Pizza Hut having dedicated IP addresses.
The author checks the “marketing emails” against 4 different publicly available reputation services. I am assuming he means he checked the sending IP addresses, because none of these services let you check emails.
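It’s worth spelling out what such a check actually involves. IP-based blocklists and many reputation services are queried over DNS: the IP’s octets are reversed and prepended to the list’s zone name, and a query that resolves means the IP is listed. A minimal sketch in Python (the zone name here is made up for illustration, not one of the services the author used):

```python
def dnsbl_query_name(ip: str, zone: str = "bl.example.net") -> str:
    """Build the DNS name to query for an IPv4 address on a DNS blocklist.

    The IP's octets are reversed and prepended to the list's zone,
    e.g. 192.0.2.1 on bl.example.net becomes 1.2.0.192.bl.example.net.
    """
    octets = ip.split(".")
    return ".".join(reversed(octets)) + "." + zone


# A listed IP resolves (typically to an address in 127.0.0.0/8);
# an unlisted IP gets NXDOMAIN. A live check would look like:
#
#   import socket
#   try:
#       socket.gethostbyname(dnsbl_query_name("192.0.2.1"))
#       listed = True
#   except socket.gaierror:
#       listed = False

print(dnsbl_query_name("192.0.2.1"))  # 1.2.0.192.bl.example.net
```

Note that this only tells you whether one particular list has an entry for one particular IP; it says nothing about how widely that list is used or why the entry exists.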
He then claims these 4 measures

give a representation of how an ESP operates.
[…] This includes whether it follows best principles and sends authenticated emails, unsubscribes Feed Back loop (FBL) complaints etc.

Well, no, not even a little bit.
The 4 measures he included are SenderScore, SenderBase, TrustedSource and MxToolbox’s blacklist checker. The first 3 are proprietary scores generated by commercial companies: SenderScore is run by Return Path, SenderBase is a reputation scheme run by Cisco/IronPort, and TrustedSource is a reputation evaluation run by McAfee.
In all cases the scoring formulas are closely guarded secrets, so we don’t know much about how the scores are generated. Still, there are a few things I’m comfortable saying about them.
Scores reflect information provided by receiving mail servers. These scores are sometimes, but not always, applicable to receivers that use a different filtering system. Consequently, good senders can have poor scores and poor senders can have good scores.
In many of the scores, volume plays an important role. Volume changes, whether up or down, can cause unexpected and transient changes in scores.
Publicly available reputation scores don’t actually tell you that much about the policies of an ESP or the deliverability at a certain ESP. Content is playing a bigger and bigger role in filtering at major ISPs, and good IP reputation scores aren’t sufficient to overcome bad content.
The only thing that actually tells you about delivery rates is looking at your own delivery rates.
The other source the author relies on to analyze deliverability is a scan of 100+ blocklists. He points out that some “ESPs” are listed and blocked by those blocklists, but he never mentions which ESPs are listed, or which blocklists are listing them. There are a lot of published blocklists that are not very widely used, and many senders, ESPs and otherwise, don’t notice or care when they’re listed. Spending the time and energy to get delisted does nothing to improve delivery, so they just ignore the listing.
As we’ve demonstrated here recently, even listings on widely used lists are not sufficient to demonstrate poor practices on the part of the sender. Sometimes the blocklists are wrong.
So the author wrote an entire blog post about analyzing deliverability, without actually analyzing deliverability.
And when he reported the results of his analysis, he left out all the relevant information that would allow us to repeat it. We can’t look at the IP addresses (or the ESPs) he used as samples, because he reported neither piece of information. We can’t look at the blocklists these IP addresses (or ESPs) are listed on, because he didn’t report the blocklists.
His delivery analysis is full of problems. Tomorrow we’ll look at the errant conclusions he drew from his “analysis.”

Related Posts

Blocklist BCP

As many of you may be aware, there is a draft document working its way through the Internet Research Task Force (IRTF) discussing best common practices for blocklists. The IRTF is a parallel organization to the IETF and is charged with long-term research related to the Internet. The Anti-Spam Research Group (ASRG) was chartered to investigate tools and techniques for dealing with spam.
Recently the ASRG posted a draft of a best practices document aimed at those running blocklists (draft-irtf-asrg-bcp-blacklists-07). This document has been under development for many years. The authors have used this document to share their experiences with running blocklists and their knowledge of what works and what doesn’t.
Best practices documents are never easy to write and consensus can be difficult. But I think the authors did a good job capturing the best practices for blocklists. I support the document in principle and, in fact, support many of the specific statements and practices outlined there. As with any best practices document, it’s not perfect, but overall it reflects the current best practices for blocklists.
Ken Magill’s article about the BCP
Anti-Abuse buzz article about the BCP

Read More

Are blocklists always a good decision?

One of the common statements about blocklists is that if they have bad data then no one will use them. This type of optimism is admirable. But sadly, there are folks who make some rather questionable decisions about blocking mail.
We publish a list called nofalsenegatives. This list has no website, no description of what it does, nothing. But the list does what it says it does: if you use nofalsenegatives against your incoming mailstream then you will never have to deal with a false negative.
Yes. It lists every IP on the internet.
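In DNS terms, a list like that takes almost nothing to run: a single wildcard record in the list’s zone matches every possible query. A sketch of what such a zone might look like (the zone name and return values here are illustrative guesses, not the actual nofalsenegatives configuration):

```
$ORIGIN nofalsenegatives.example.
$TTL 3600
@  IN SOA ns.nofalsenegatives.example. hostmaster.nofalsenegatives.example. (
          1 3600 600 86400 3600 )
   IN NS  ns.nofalsenegatives.example.
;; The wildcard matches the reversed-octet name for any IP queried,
;; so every lookup returns a "listed" answer.
*  IN A   127.0.0.2
*  IN TXT "Listed: this zone lists every IP address"
```

Because the wildcard matches any name under the zone, a lookup like 1.2.0.192.nofalsenegatives.example resolves for every IP on the internet.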
The list was set up to illustrate a point during a discussion many years ago. Some of the people who were part of that discussion liked the point so much that they continued to mention the list, usually when someone on a mailing list complained about how their current spam filtering wasn’t working.
Some of the folks complaining about poor filtering, including ones who should know better, actually did install nofalsenegatives in front of their mailserver, thus blocking every piece of mail sent to them.
To be fair, usually they noticed a problem within a couple hours and stopped using the list.
This has happened often enough to convince me that not everyone makes informed decisions about blocking. Sure, these were usually small mailservers, with maybe a double handful of users. But these sysadmins installed a blocklist with no online presence except a DNS entry, without asking what it does, how it works, or what it lists.
Not everyone makes sensible decisions about blocking mail. Our experience with people using nofalsenegatives is just one, very obvious, data point.

Read More

Return Path speaks about Gmail

Melinda Plemel has a post on the Return Path blog discussing delivery to Gmail.

Read More