One of the things I’ve been spending a lot of time thinking about lately is how we measure deliverability. Standard deliverability measurements include: opens, bounces, complaints, and clicks. There are also other tools like probe accounts, panel data, and public blocklists. Taken together these measurements and metrics give us an overall view of how our mail is doing. More and more...
How accurate are reports?
One of the big topics of discussion in various deliverability circles is the problems many places are seeing with delivery to Microsoft properties. One of the challenges is that Microsoft seems to be happy with how their filters are working, while senders are seeing vastly different data. I started thinking about reporting, how we generate reports, and how we know the reports are correct...
Targets and measures
Over the past few years a number of email delivery products have been launched. Many of these products are intended to improve deliverability by improving metrics. The problem is they don’t work the way their purchasers think. Take data hygiene services. For the most part, these services take a list of email addresses, do data analysis and magic, and then return a “clean” list to...
Improving Gmail Delivery
Lately I’m hearing a lot of people talk about delivery problems at Gmail. I’ve written quite a bit about Gmail (Another way Gmail is different, Gmail filtering in a nutshell, Poor delivery at Gmail but no where else, Insight into Gmail filtering) over the last year and a half or so. But those articles all focus on different parts of Gmail delivery and it’s probably time for a...
Engagement, Engagement, Engagement
I saw a headline today: New Research from Return Path Shows Strong Correlation Between Subscriber Engagement and Spam Placement I have to admit, my first reaction was “Uh, Yeah.” But then I realized that there are some email marketers who do not believe engagement is important for email deliverability. This is exactly the report they need to read. It lays out the factors that ISPs...
Ask Laura: What should we be measuring?
Dear Laura, We are trying to evaluate the success of our email programs, and I don’t have a good sense of what metrics we should be monitoring. We have a lot of data, but I can’t tell what matters and what doesn’t. Can you advise us on what we should look at and why? Thanks, Metrics Are Hard Dear Metrically-Challenged, You’re not going to like this answer, but here goes. It depends...
Ask Laura: Does changing ESPs hurt deliverability?
Dear Laura, We’re a small ESP and as we onboard new clients, we often hear them ask “Why did I get better open rates with our previous provider? There has to be something wrong with your platform!” As part of the onboarding process, we meet with new clients to provide best practices and let them know they are building a reputation with ISPs on new IPs. We talk about how algorithms are...
February 2015 – The month in email
This was a short and busy month at WttW! We attended another great M3AAWG conference, and had our usual share of interesting discussions, networking, and cocktails. I recapped our adventures here, and shared a photo of the people who keep your email safe while wearing kilts as well. We also commended Jayne Hitchcock on winning the Mary Litynski award for her work fighting abuse and cyberstalking...
Meaningless metrics
I’ve been having some conversations with fellow delivery folks about metrics, delivery, and bad practices. Sometimes a sender will have what appear to be good metrics, but really isn’t getting them through any good practices. They’re managing to avoid the clear indicators of bad practices (complaints, SBL listings, blocks, etc.), but only because the metrics aren’t good...
Metrics, metrics, metrics
I’ve been sitting on this one for about a week, after the folks over at IBM/Pivotal Veracity called me to tell me about this. But now their post is out, so I can share.
There are ISPs providing real metrics to senders: QQ and Mail.ru. Check out Laura Villevieille’s blog post for the full details.