I field a lot of delivery questions on various online fora. Often people try to anonymise what they’re asking about by abstracting out the question. The problem is that there are very few answers we can give in the abstract.
What are some examples of these types of questions?
- Should you always remove an address that hard bounces? Well, in general, yes. But there are a small number of cases where the hard bounce is a mistake on the part of the receiver and you shouldn’t remove that address.
- Should you send email to recipients who haven’t engaged in 3 years? Well, in general, no. But I’ve seen and managed campaigns to recipients who’d been unengaged much longer than that. What are you really trying to do?
- If we limit our sending to people who’ve opted in to email, we’ll solve our spamtrap problem, right? Well, first, why do you think you have a spamtrap problem? If you’re Spamhaus listed, there’s a lot more you need to do. If you’re seeing one or two traps at the commercial sensor networks, then what does your overall deliverability look like?
- Why would our mail suddenly start to go to bulk? Overall, it wouldn’t. What did you change? Did your website get compromised? Have you linked to a new image server? Did you publish a DMARC record? Did you mention a domain with a bad reputation?
- If we change the from address of our mail, will it affect our deliverability? It can, but which from domain you’re talking about, what you’re changing it from, and what you’re changing it to all matter before anyone can actually answer the question.
Deliverability is not a science. There are no hard and fast rules. Even the rules I wish were true, like only send opt-in mail, aren’t really hard and fast. A lot of folks get decent delivery using purchased or otherwise non-opt-in lists. I don’t like it, but I acknowledge it.
To get good deliverability advice, the full situation needs to be described. History, specifics, IPs, and domains all matter. Where your email addresses came from and how you’ve maintained your database matters. It all matters. Abstracting out a question just means you get an abstract, generic answer, and that doesn’t help anyone.
Many deliverability folks stopped recommending publishing SPF records for the 5322.from address to get delivery to Microsoft. I even remember Microsoft saying they were stopping SenderID-style checking. A discussion on the emailgeeks Slack channel has me rethinking that.
It started out with one participant asking whether other folks were seeing delivery improvement at MS when they added an SPF record for the 5322.from. Other folks chimed in and said yes, they had seen the same thing. Then I started digging and discovered that MS is still recommending SenderID records on their troubleshooting page.
Email sent to Outlook.com users should include Sender ID authentication. While, other forms of authentication are available, Microsoft currently only validates inbound mail via SPF and Sender ID authentication.
Microsoft Sender Support
The support page may be out of date, or it may not. In any case, it may be worth adding an SPF record for your 5322.from domain if you’re seeing persistent problems at Microsoft and nowhere else.
This isn’t great practice overall. But, it may explain why some folks are having such a hard time cracking the MS inbox.
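For illustration, here’s roughly what that record looks like. The domain and the include target below are hypothetical examples, not values you should copy: if the visible From address is at newsletter.example.com and the mail goes out through an ESP, you’d publish a TXT record on the 5322.from domain itself, in addition to whatever SPF already exists on the 5321.from (return-path) domain.

```
; Hypothetical zone entry -- domain and include target are examples only.
; This goes on the visible From (5322.from) domain, not just the bounce domain.
newsletter.example.com.  IN  TXT  "v=spf1 include:_spf.esp-provider.example ~all"
```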
Multiple times over the last few weeks folks have posted a screenshot of Google Postmaster Tools showing some percentage of mail failing DMARC. They then ask why DMARC is failing. Thanks to how DMARC was designed, they don’t need to ask anyone this; they have all the data they need to work it out themselves.
The DMARC protocol contains a way to request reports when DMARC authentication fails. There are even two different kinds of reports: aggregate and per-message reports.
The major mailbox providers send aggregate DMARC reports. These reports show a summary of messages received using the domain’s from address and report all authentication failures for those messages. According to the specification aggregate reports should contain the following information:
- The DMARC policy discovered and applied, if any
- The selected message disposition
- The identifier evaluated by SPF and the SPF result, if any
- The identifier evaluated by DKIM and the DKIM result, if any
- For both DKIM and SPF, an indication of whether the identifier was in alignment
- Data for each Domain Owner’s subdomain separately from mail from the sender’s Organizational Domain, even if there is no explicit subdomain policy
- Sending and receiving domains
- The policy requested by the Domain Owner and the policy actually applied (if different)
- The number of successful authentications
- The counts of messages based on all messages received, even if their delivery is ultimately blocked by other filtering agents
Reports come in XML format, which is very difficult to read without some processing. But anyone who is publishing DMARC records should have some way to read aggregate reports, at a minimum. These are, to my mind, the actually useful piece of the protocol.
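As a sketch of what that processing can look like, here’s a minimal Python example that pulls the failing rows out of a simplified, hypothetical aggregate report using only the standard library. Real reports arrive as zipped or gzipped XML attachments and contain more fields than shown here.

```python
# Minimal sketch: summarise the failing rows of a DMARC aggregate report.
# The XML below is a simplified, hypothetical example of the report format.
import xml.etree.ElementTree as ET

SAMPLE_REPORT = """<?xml version="1.0"?>
<feedback>
  <policy_published>
    <domain>example.com</domain>
    <p>none</p>
  </policy_published>
  <record>
    <row>
      <source_ip>192.0.2.10</source_ip>
      <count>120</count>
      <policy_evaluated>
        <disposition>none</disposition>
        <dkim>pass</dkim>
        <spf>fail</spf>
      </policy_evaluated>
    </row>
  </record>
  <record>
    <row>
      <source_ip>198.51.100.7</source_ip>
      <count>5</count>
      <policy_evaluated>
        <disposition>none</disposition>
        <dkim>fail</dkim>
        <spf>fail</spf>
      </policy_evaluated>
    </row>
  </record>
</feedback>
"""

def summarise(report_xml):
    """Return (source_ip, count, dkim, spf) for rows failing either check."""
    root = ET.fromstring(report_xml)
    failures = []
    for row in root.iter("row"):
        dkim = row.findtext("policy_evaluated/dkim")
        spf = row.findtext("policy_evaluated/spf")
        if dkim != "pass" or spf != "pass":
            failures.append((row.findtext("source_ip"),
                             int(row.findtext("count")), dkim, spf))
    return failures

for ip, count, dkim, spf in summarise(SAMPLE_REPORT):
    print(f"{ip}: {count} messages, dkim={dkim}, spf={spf}")
```

Even this much is enough to answer the “why is DMARC failing?” question: it tells you which source IPs are failing which check, and how much mail is involved.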
The other type of report is the per-message, or forensic, report. These reports contain copies of the whole message. Last I heard, not many providers send forensic reports, which is fine, because the sheer volume of mail involved has the ability to melt down servers.
Next time you see Google Postmaster Tools showing DMARC failures, go check your DMARC reports. They will tell you what’s failing and from where. If there’s a problem, you’ll be able to spot it and address it. No need to ask anyone outside your organization or DMARC processor for help.
The nice folks over at Postmark shared a new deliverability resource last week: the SMTP Field Manual, a collection of SMTP responses they’ve seen in the wild. It’s a useful resource. They’re also collecting responses from other senders, meaning we can crowdsource a reference for email deliverability folks.
This week there was a reported uptick in user-unknown responses for Verizon email addresses. The specific response folks were seeing was:
554 delivery error: dd email@example.com is no longer valid. [-20] -mta4047.aol.mail.bf1.yahoo.com
This appears to be a problem with their require-recipient-valid-since header checking. They are aware and are working on a fix.
What this means for senders: if you’re seeing Verizon addresses hard bounce, you shouldn’t necessarily drop them from your list. They are likely still valid, and you can continue to retry mail to those addresses. This seems to have affected only verizon.net addresses, not other addresses across the properties.
On the emailgeeks Slack channel someone asked for advice about going to conferences. There were lots of great suggestions. I threw in the Pac-Man Rule and realised a lot of folks hadn’t heard of it before.
Eric Holscher created the Pac-Man Rule, and it’s really simple. If you’re standing in a group talking at a conference, a closed circle can make it challenging for new people to join you. But if you leave a space for another person by standing in the shape of a Pac-Man, you’re inviting new people to join.
Here’s a video explaining it:
I’ve found this really useful and helpful both when I’ve been in conversations and when I’ve wanted to join conversations.
Friday the Tulsi Gabbard campaign filed the expected first amended complaint against Google for suspending her AdWords account immediately after the first Democratic debate. A full copy of the complaint is available.
My first reading is that it’s only slightly better written than the original complaint. The document reads to me more like a policy statement than an actual lawsuit. Frankly, I’m about done with presidents and presidential campaigns that think they’re better than, or above, the rest of us citizens and that normal rules don’t apply to them. But Tulsi appears to think being a presidential candidate means she gets special access and privileges.
140. Google has established a clear policy of using its power over speech to favor certain political viewpoints over others. For example, since June 2019, Google has used its unique control over political advertising and election speech to try to silence Tulsi Gabbard, a presidential candidate who has spoken out against Google.
141. But Tulsi will not be silenced. Google is trying to change the outcome of an American presidential election, and the government has been unwilling and unable to do anything about it. This action seeks to change that.
She’s also arguing that it’s unfair, so frightfully unfair, that the individuals working in the elections department at Google Ads support other candidates. The underlying implication is that she can only be treated fairly if her account manager is also a supporter of hers. I see this as a symptom of the incredibly polarised world we live in: Tulsi can’t believe anyone would treat her fairly unless they also support her for president.
My naive reading of the initial and first amended complaints leads me to believe that on the night of the first debate, the campaign tried to purchase a lot more ads than they had previously. This sudden increase in purchasing activity triggered some of Google’s anti-fraud detection algorithms and her AdWords account was temporarily shut down. If I didn’t say it when I wrote about the first complaint, I’ll say it now: this is good behaviour. If something significant changes on an account, particularly a verified account belonging to a presidential candidate, then it should be shut down until the activity is verified.
I’ve heard some comments from friends and colleagues who are Google employees about the Podesta phish / DNC hack in the run up to the 2016 election. These comments, while vague and containing no details, lead me to think there was a significant internal push to make sure that Google would catch such types of compromises in the future.
Plus, I’m sure Google has limits on AdWords accounts to make sure their customers aren’t surprised by excessive charges, even verified accounts and even political accounts. The last thing they want to hear is that the AdWords bill won’t be paid because the charges weren’t authorised.
Then there’s this:
154. An actual controversy exists between the Campaign and Google as to whether Google’s policies and procedures, and their application thereof, violate the United States Constitution. The correct interpretation is that Google’s policies and procedures, facially and as applied, violate the Campaign’s speech and association rights under the United States Constitution
As long as I’ve been on the internet, folks have been trying to argue that the First Amendment of the US Constitution applies to private networks. I have yet to see any compelling argument that it does. While there are bigger discussions to be had about the responsibilities of large internet providers, I don’t think they can or should be based on the First Amendment.
I do think networks have a fundamental responsibility to stop abuse on their platform. All of the major networks, and many of the minor ones, have failed spectacularly at doing this. Just this weekend a Facebook friend shared this NYTimes article on the explosion of child sexual abuse material online (CW: child abuse and torture, this is a very difficult read) and how law enforcement and networks have utterly failed to effectively address the issue. There are days I think the Internet has contributed to more harm than good. Reading that article led to one of those days.
The FTC filed suit against Match.com for using fake accounts to entice people into signing up for accounts. (WA Post) Part of the FTC’s allegations include that Match flagged the accounts and prevented them from contacting paying Match users while simultaneously allowing the users to contact free Match users.
I’m actually surprised the FTC took action. I’m not surprised Match allowed, and possibly even encouraged, fraudulent accounts to send mail to registered users. The revenue they were making from the fraud was significant, according to Match’s own numbers.
Hundreds of thousands of consumers subscribed to Match.com shortly after receiving a fraudulent communication. In fact, Defendant has consistently tracked how many subscribers these communications have generated, typically by measuring the number of consumers who subscribe to Match.com within 24 hours of receiving an advertisement that touts a fraudulent communication. From June 2016 to May 2018, for example, Defendant’s analysis found that consumers purchased 499,691 subscriptions within 24 hours of receiving an advertisement touting a fraudulent communication.
FTC Complaint (.pdf)
What doesn’t surprise me is that Match didn’t stop the outbound abuse. There are a lot of technology companies that will protect their own users and their own networks, while continuing to profit off of abuse of other networks. I’ve repeatedly talked with companies having delivery problems and pointed out that the fraud was a likely part of the delivery problems. I’ve rarely found any company that cared about fraud that was making them money.
A decade or so ago I was helping a client troubleshoot a Spamhaus listing. They, as many companies do, had a database with addresses from a number of different sources. Spamhaus was asking them to reconfirm the entire database, which they didn’t want to do. I came up with the idea that if we had some sign of activity on the email address, like an open or a click, and some other corresponding activity related to that open or click, then we could assume the address likely belonged to a real user who was interested in the emails.
The reason Spamhaus asks for confirmed opt-in is that they want to make sure the actual recipient of the email wants to receive it. The confirmed opt-in process wraps the grant of permission and the identification check together. But, I reasoned, if we can meet both checks through a different process, then the addresses are confirmed.
In that case, and in the other cases where I’ve used open rates as part of the heuristics to fix Spamhaus and other delivery problems, things worked out. Over time, I’ve improved my recommendations based on feedback from clients and filters. The confirmation criteria are more sophisticated and more accurate.
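A simplified sketch of the kind of heuristic I’m describing. The event names and the 30-day window here are illustrative, not the actual criteria I use with clients:

```python
from datetime import datetime, timedelta

def looks_confirmed(events, window_days=30):
    """events: list of (kind, timestamp) tuples for a single address.

    An open alone isn't enough; we want a corroborating action (a click,
    a site login, a purchase) close enough in time to the open that both
    signals plausibly come from the same real, interested person.
    """
    opens = [t for kind, t in events if kind == "open"]
    actions = [t for kind, t in events
               if kind in ("click", "site_login", "purchase")]
    window = timedelta(days=window_days)
    return any(abs(o - a) <= window for o in opens for a in actions)

# An open backed by a click the next day looks confirmed;
# an open with nothing corroborating it does not.
engaged = [("open", datetime(2019, 9, 1)), ("click", datetime(2019, 9, 2))]
open_only = [("open", datetime(2019, 9, 1))]
```

The point of requiring the second signal is exactly the weakness of opens discussed below: a lone image load proves very little on its own.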
Much of my refinement is an effort to compensate for how inaccurate and unreliable open data is. We all think of open rates as a measure of when a user opens an email. But that’s not what is being measured. Instead, open tracking relies on an invisible pixel that is loaded if and when a user loads images in their email. Some, perhaps even most, people load images by default. But not everyone does. There are even some mail clients that can’t load images.
In addition to the mail client issue, there are cases where an image is loaded without the user ever seeing the mail. In fact, I dealt with a client issue a few months ago where the client was seeing opens before the ESP delivered the mail. In that case, best we can tell, the filter rejected the message after collecting all of the data. A copy of the message was stored on the appliance and the links were checked by the filter. The links were clean, so when the sending server retried the mail, it was accepted and delivered to the user. I think things like this are going to become more common as filtering gets more sophisticated.
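To make the mechanics concrete, here’s a minimal sketch of how open tracking works under the hood; the names and URL layout are illustrative. The message embeds a one-pixel image whose URL identifies the recipient, and an “open” is recorded when anything fetches that image, whether that’s a human’s mail client or a filter checking content:

```python
import base64

# The classic 1x1 transparent GIF used as a tracking pixel.
TRANSPARENT_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7")

opens = set()  # in a real system this would be a database, not a set

def tracking_pixel_url(base_url, recipient_id):
    """The URL embedded in the HTML body as an <img src="...">."""
    return f"{base_url}/open.gif?r={recipient_id}"

def handle_pixel_request(recipient_id):
    """Server-side handler for the image fetch.

    Note: this fires whenever the image is loaded -- by a user's mail
    client, by a filter pre-fetching content, or by an appliance retrying
    a stored copy -- which is exactly why opens can over- and under-count.
    """
    opens.add(recipient_id)
    return TRANSPARENT_GIF
```

Nothing in that handler can tell a human apart from a machine fetching the image, which is the core limitation of open data.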
While I’ve been making my criteria for using open rates stricter, I’ve watched more and more marketers and deliverability experts try to use open rates as a sign of permission. The underlying belief is that if recipients are opening our mail, they must want it. Statistically, and in general, that may be the case. But there are cases where an open doesn’t actually mean the recipient wants the mail.
Every email that lands in the spam folder hurts your overall reputation. Continuing to send mail that ends up in spam will decrease your overall delivery in the long term.
When dealing with deliverability issues, including Spamhaus listings, open rates can tell us a little bit about our data. We can use open rates, along with other data, to make sensible decisions about which email addresses belong to folks who want that mail.
What we can’t do is say that every email address that has opened an email has given us permission to send to them. We also can’t say that because we have decent open rates our overall data is good. That’s just not how email works. Yet every day I see folks incorrectly drawing very specific correlations between opens and permission. I’ve even seen some companies on the SBL try to argue that they should be removed because their open rates are high. (Hint: it’s not a good argument and it doesn’t get them delisted.)
Open rates are an inexact and occasionally inaccurate measurement, and when we use them we must remember that. They are what we have, and we’d be fools to ignore them as information. But we cannot continue to treat them as more than what they are: estimates of how recipients are interacting with our mail.
One of the ongoing recommendations to improve deliverability is to send email that is timely and relevant to the recipient. The idea is that if you send mail a recipient wants, they’re more likely to interact with it in a way that signals to the mailbox provider that the message is wanted. The baseline for that, at least whenever I’ve talked about timely and relevant, is that the recipient asked for mail from you in the first place.
Permission is fundamental to a successful email marketing program. But, even now, there is an entire class of marketers that thinks “cold email” can be part of a successful marketing program. It can’t, and they know it. But the allure of spam is strong. The result is an entire marketing ecosystem that exists to facilitate “legitimate” spam.
This ecosystem includes different tools designed to make spam look relevant and timely. It includes things like:
- browser plugins that allow senders to harvest information off LinkedIn and blog websites;
- easy-to-design custom landing pages hosted at an unrelated-but-kinda-the-same domain;
- tools to allow you to send the spam out through Gmail or G Suite accounts (currently Google is the 9th-worst spam support ISP in the world, according to Spamhaus);
These tools might, in very specific and unique circumstances, make an email relevant to one or two recipients. But they will never make the message broadly relevant.
Over the last few weeks I’ve been taking screenshots and recording some of the “timely and relevant” spam I’ve been getting. This spam is all, clearly, the product of web-scraping software of some kind. One example is this message, received today from an Eastern European development company.
It is my pleasure to meet Women of Email’s profile on LinkedIn.
With this email, I would like to check whether you are looking for a software development partner?
With 300 software experts in Ukraine, MobiDev can help Women of Email to close the software development skills gap – from mobile/web development to innovative technologies implementation (AR, AI, IoT).
By the way, MobiDev team is coming to Dublin next month, so there is a chance to discuss details face to face. Please let me know if it is the right time to move our conversation forward?
Warm regards, Olena Slotska, Head of Business Development, MobiDev
Screenshot of Email
For some tiny percentage of the recipients of this mail it’s possibly timely and relevant. The most likely case is a company makes the business decision that they’re looking to move into the software development space. All they need to do now is find a partner. And! Just like that, this email shows up in the right person’s mailbox. How likely is this, really?
Or, consider that the company’s current developers have just stopped answering the phone. They’ve decided they need replacement developers. Before they can go looking for replacements, they get this email. All their development problems are solved! A timely and relevant email.
I’m sure there are other situations where this email might be timely and might be relevant. But for the vast majority of us, it’s just spam.
Why does this matter? Because the mailbox providers are getting much, much better at correlating the cold email with the company behind it. They’re also getting less tolerant of companies that spam for customers and then try to “follow all the best practices” for permission-based email. I’m seeing more and more folks struggling with sudden-onset delivery problems. I wonder how many of these cases are the result of the mailbox provider connecting the dots on their email programs.