While answering a question about how to improve IP reputation at Gmail, I realized that I no longer treat Gmail opens as saying anything about how a user is interacting with email. There are so many cases and ways that a pixel load can be triggered without the user actually caring about the mail that it’s not a measure of the user at all.
That doesn’t mean opens are useless. In fact, they’re very useful. But only if you have the full picture.
Gmail, and other consumer mailbox providers, do not allow images to load for messages in the bulk folder.
Gmail, and other consumer mailbox providers, do some level of individualised delivery. Even if most of a particular mailing is going to the bulk folder, individual users may still get that email in their inbox.
Every message delivered to the spam folder, whether marked as spam by the user or delivered there by the mailbox provider, hurts your reputation.
Continuing to send mail that ends up in spam will decrease your overall delivery in the long term.
One of the ways to improve reputation is to remove anything that is hurting it. This means removing any emails going to the bulk folder. How do we know which emails are going to the bulk folder? One piece of data is whether an image was loaded, i.e., whether an open was recorded. That open won’t happen if mail is in bulk.
How far back we go to remove addresses is an interesting question. I can argue all sorts of timelines; it doesn’t really matter. I’ve seen reputation improvement using just a few thousand emails that we knew were going to the inbox.
The real signal is not that you perfectly remove every address receiving mail in the bulk folder, but that you remove the majority of them. Want to go back a year? Sure. 18 months? Yeah, that will probably work. Longer? Well, what’s the likelihood those addresses have been abandoned and no longer have an active user logging in and looking at the mail?
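In practice this boils down to segmenting on last-open date. Here’s a minimal sketch of that kind of filter; the `subscribers` structure and field names are hypothetical, not from any particular ESP:

```python
from datetime import datetime, timedelta

def inbox_only_segment(subscribers, window_days=365):
    """Keep only addresses with a recorded open inside the window.

    `subscribers` is assumed to be a list of dicts, each with an
    email address and the timestamp of the last recorded open
    (None if no open was ever recorded).
    """
    cutoff = datetime.utcnow() - timedelta(days=window_days)
    return [
        s["email"]
        for s in subscribers
        if s.get("last_open") is not None and s["last_open"] >= cutoff
    ]

subscribers = [
    {"email": "a@example.com", "last_open": datetime.utcnow() - timedelta(days=30)},
    {"email": "b@example.com", "last_open": datetime.utcnow() - timedelta(days=600)},
    {"email": "c@example.com", "last_open": None},
]
print(inbox_only_segment(subscribers))  # only a@example.com inside one year
```

Widening `window_days` to 540 or so is the “18 months” version of the same decision.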
Once reputation is repaired, you can start to mail some of the suppressed folks on your lists. But stopping mail that is actively hurting your reputation is always the first step. Think about it: if you could remove spamtraps from your lists, wouldn’t you? Mail going to the spam folder can damage your delivery just as much as spamtraps.
A frequent question in a number of deliverability spaces is how to tell if a message is transactional or marketing. In most cases the decision is related to whether or not to respect an unsubscribe request. All too often companies decide that their messages are too important to allow someone to opt out of. The problem is, in some cases, there is no longer a customer relationship to send notices about.
This came up because it’s been just about a year now since I unsubscribed from most of my US based commercial lists. (Yes, we’ve been in Dublin more than a year now!). Because it’s been a year I’m getting a lot of “transactional” messages. Many of them are reminding me to log into my account to see my current rewards level. Others are offering me coupons if I come back.
These aren’t transactional messages; they have nothing to do with any transactions I’ve made. They’re spam, plain and simple. They violate CAN-SPAM, as I have opted out of mail from those companies. I’ve not made any purchases from those companies in more than a year.
But I’ve seen firsthand how marketing departments justify emails like this as ‘transactional.’ This is also when they start talking about the 80/20 rule that some spammer made up to justify calling their marketing mail transactional. Then they don’t let you actually opt out of the message because it’s an “account message.”
I can’t actually affect delivery of this kind of mail as I’m not at one of the commercial providers. But if I were, you can bet I’d be reporting each and every one of these messages as spam. Even if it only taught the provider to put this mail in my spam folder, that’s still better than continuing to receive mail I have asked not to get.
Seriously, marketers, at some point you’re going to have to stop biting the apple. When recipients tell you to stop mailing them, take that as an instruction that you should stop mailing them. Continuing to mail people who’ve opted out only injures your reputation and your overall delivery.
I field a lot of delivery questions on various online fora. Often people try to anonymise what they’re asking about by abstracting out the question. The problem is that there are very few answers we can give in the abstract.
What are some examples of these types of questions?
Should you always remove an address that hard bounces? Well, in general, yes. But there are a small number of cases where the hard bounce is a mistake on the part of the receiver and you shouldn’t remove that address.
Should you send email to recipients who haven’t engaged in 3 years? Well, in general, no. But I’ve seen and managed campaigns to recipients much older than that. What are you really trying to do?
If we limit our sending to people who’ve opted in to email, we’ll solve our spamtrap problem, right? Well, first, why do you think you have a spamtrap problem? If you’re Spamhaus listed, there’s a lot more you need to do. If you’re seeing one or two traps at the commercial sensor networks, then what’s your overall deliverability look like?
Why would our mail suddenly start to go to bulk? Overall, it wouldn’t. What did you change? Did your website get compromised? Have you linked to a new image server? Did you publish a DMARC record? Did you mention a domain with a bad reputation?
If we change the from address of our mail, will it affect our deliverability? It can, but which from domain you’re talking about, what you’re changing it from, and what you’re changing it to all matter before anyone can actually answer the question.
Deliverability is not a science. There are no hard and fast rules. Even the rules I wish were true, like only send opt-in mail, aren’t really hard and fast. A lot of folks get decent delivery using purchased or otherwise non-opt-in lists. I don’t like it, but I acknowledge it.
In order to get good deliverability advice for a situation the full situation needs to be described. History, specifics, IPs, and domains all matter. Where your email addresses came from and how you’ve maintained your database matters. It all matters. Abstracting out a question just means you get an abstract and generic answer, and that doesn’t help anyone.
Many deliverability folks stopped recommending publishing SPF records for the 5322.from address to get delivery to Microsoft. I even remember Microsoft saying they were going to stop doing Sender ID-style checking. A discussion on the emailgeeks slack channel has me rethinking that.
It started out with one participant asking if other folks were seeing delivery improvements at MS when they added an SPF record for the 5322.from. Other folks chimed in and said yes, they had seen the same thing. Then I started digging and discovered that MS is still recommending Sender ID records on their troubleshooting page.
Email sent to Outlook.com users should include Sender ID authentication. While, other forms of authentication are available, Microsoft currently only validates inbound mail via SPF and Sender ID authentication.
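For anyone who wants to try this themselves, the change is just an ordinary SPF TXT record published on the visible (5322) From: domain. A hypothetical example; the domain, IP, and include target are placeholders, not real infrastructure:

```text
; TXT record on the domain used in the visible From: header
example.com.  IN TXT  "v=spf1 ip4:192.0.2.10 include:_spf.youresp.example ~all"
```

If your 5322.from domain differs from your 5321.from (bounce) domain, this record is in addition to, not instead of, the SPF record on the bounce domain.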
Multiple times over the last few weeks folks have posted a screenshot of Google Postmaster Tools showing some percentage of mail failing DMARC. They then ask why DMARC is failing. Thanks to how DMARC was designed, they don’t need to ask anyone: they already have all the data they need to work this out themselves.
The DMARC protocol contains a way to request reports when DMARC authentication fails. There are even two different kinds of reports: aggregate and per-message reports.
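Requesting those reports is done with the `rua` (aggregate) and `ruf` (failure/forensic) tags in the DMARC record itself. A minimal example record, with placeholder domain and mailbox addresses:

```text
_dmarc.example.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-agg@example.com; ruf=mailto:dmarc-fail@example.com"
```

With `p=none` this record asks for reporting only and requests no disposition change for failing mail, which is the usual starting point.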
The major mailbox providers send aggregate DMARC reports. These reports show a summary of messages received using the domain’s from address and report all authentication failures for those messages. According to the specification aggregate reports should contain the following information:
The DMARC policy discovered and applied, if any
The selected message disposition
The identifier evaluated by SPF and the SPF result, if any
The identifier evaluated by DKIM and the DKIM result, if any
For both DKIM and SPF, an indication of whether the identifier was in alignment
Data for each Domain Owner’s subdomain separately from mail from the sender’s Organizational Domain, even if there is no explicit subdomain policy
Sending and receiving domains
The policy requested by the Domain Owner and the policy actually applied (if different)
The number of successful authentications
The counts of messages based on all messages received, even if their delivery is ultimately blocked by other filtering agents
Reports come in XML format, which is very difficult to read without some processing. But anyone who is publishing DMARC records should have, at a minimum, some way to read aggregate reports. These are, to my mind, the actually useful piece of the protocol.
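If you don’t have a commercial processor, even a few lines of scripting will pull the useful fields out of an aggregate report. Here’s a sketch using Python’s standard library against a heavily trimmed sample report; real reports follow the RFC 7489 schema and carry many more fields:

```python
import xml.etree.ElementTree as ET

# A trimmed-down aggregate report for illustration only.
report_xml = """<?xml version="1.0"?>
<feedback>
  <report_metadata>
    <org_name>mailbox-provider.example</org_name>
  </report_metadata>
  <policy_published>
    <domain>example.com</domain>
    <p>none</p>
  </policy_published>
  <record>
    <row>
      <source_ip>192.0.2.10</source_ip>
      <count>42</count>
      <policy_evaluated>
        <disposition>none</disposition>
        <dkim>fail</dkim>
        <spf>pass</spf>
      </policy_evaluated>
    </row>
  </record>
</feedback>"""

root = ET.fromstring(report_xml)
for record in root.iter("record"):
    row = record.find("row")
    ip = row.findtext("source_ip")
    count = row.findtext("count")
    dkim = row.findtext("policy_evaluated/dkim")
    spf = row.findtext("policy_evaluated/spf")
    print(f"{ip}: {count} messages, DKIM={dkim}, SPF={spf}")
```

Grouping the output by `source_ip` quickly shows whether failures come from your own infrastructure, a forgotten third-party sender, or forgery.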
The other type of report is the per-message, or forensic, report. These reports contain copies of the whole message. Last I heard there weren’t many providers sending forensic reports, which is fine, because they have the ability to melt down servers from the sheer volume of mail.
Next time you see Google Postmaster Tools showing DMARC failures, go check your DMARC aggregate reports. They will tell you what’s failing and from where. If there’s a problem you’ll be able to spot it and address it. No need to go ask anyone outside your organization or DMARC processor for help.
The nice folks over at Postmark shared a new deliverability resource last week: the SMTP Field Manual. It’s a collection of SMTP responses they’ve seen in the wild, and they’re also collecting responses from other senders, meaning we can crowdsource a genuinely useful resource for email deliverability folks.
This week there was a reported uptick in user-unknown responses for Verizon email addresses. The specific response folks were seeing was:
554 delivery error: dd firstname.lastname@example.org is no longer valid. [-20] -mta4047.aol.mail.bf1.yahoo.com
This appears to be a problem with their Require-Recipient-Valid-Since header checking. They are aware of it and are working on a fix.
What this means for senders: if you’re seeing Verizon addresses hard bounce, you shouldn’t necessarily drop them from your list. They are likely still valid, and you can continue to retry mail to those addresses. This does seem to have affected only verizon.net addresses, not other addresses across the properties.
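One way to handle an incident like this in bounce processing is a temporary exception rule that keeps the affected addresses off the suppression list. This is a hypothetical sketch, not anyone’s production logic; the response string is the one quoted above with an example address:

```python
import re

# Temporary rule for a known false-bounce incident: don't suppress
# verizon.net addresses hard bouncing with this specific response.
KNOWN_BAD_RESPONSE = re.compile(r"554 delivery error: dd .* is no longer valid")

def should_suppress(address, smtp_response):
    """Return True if the address should be dropped from the list."""
    domain = address.rsplit("@", 1)[-1].lower()
    if domain == "verizon.net" and KNOWN_BAD_RESPONSE.search(smtp_response):
        return False  # likely a false positive; keep retrying instead
    # Normal rule: 5xx user-unknown style responses suppress the address.
    return smtp_response.startswith("554") or smtp_response.startswith("550")

resp = "554 delivery error: dd jane.doe@verizon.net is no longer valid. [-20]"
print(should_suppress("jane.doe@verizon.net", resp))  # False: retry
print(should_suppress("jane.doe@example.org", resp))  # True: suppress
```

The important part is that the exception is scoped to one domain and one response pattern, and that it gets removed once the provider confirms the fix.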
On the emailgeeks slack channel someone asked for advice about going to conferences. There were lots of great suggestions. I threw in the Pac Man Rule and realised a lot of folks haven’t heard of it before.
Eric Holscher created the Pac-Man rule, and it’s really simple. If you’re standing in a closed circle talking at a conference, it can be challenging for new people to join you. But if you leave a space for another person, standing in the shape of a Pac-Man, you’re inviting new people to join.
I’ve found this really useful and helpful both when I’ve been in conversations and when I’ve wanted to join conversations.
Friday the Tulsi Gabbard campaign filed the expected first amended complaint against Google for suspending her adwords account immediately after the first Democratic debate. A full copy of the complaint is available.
First reading is that it’s only slightly better written than the first complaint. The document reads to me more like a policy statement than an actual lawsuit. Frankly, I’m about done with presidents and presidential campaigns that think they’re better than or above the rest of us citizens and that normal rules don’t apply to them. But, Tulsi appears to think being a presidential candidate means she gets special access and privileges.
140. Google has established a clear policy of using its power over speech to favor certain political viewpoints over others. For example, since June 2019, Google has used its unique control over political advertising and election speech to try to silence Tulsi Gabbard, a presidential candidate who has spoken out against Google.

141. But Tulsi will not be silenced. Google is trying to change the outcome of an American presidential election, and the government has been unwilling and unable to do anything about it. This action seeks to change that.
She’s also arguing that it’s unfair, so frightfully unfair, that the individuals working in the elections department at Google Ads support other candidates. The underlying implication being that she can only be treated fairly if her account manager is also a supporter of hers. I see this as a symptom of the incredibly polarised world we live in. Tulsi can’t believe anyone would treat her fairly unless they also support her for president.
My naive reading of the initial and first amended complaints leads me to believe that on the night of the first debate, the campaign tried to purchase a lot more ads than they had previously. This sudden increase in purchasing activity triggered some of Google’s anti-fraud detection algorithms and her adwords account was shut down temporarily. If I didn’t say it when I wrote about the first complaint, I’ll say it now: This is good behaviour. If something significant changes on an account, particularly a verified account belonging to a presidential candidate, then it should be shut down until the activity is verified.
I’ve heard some comments from friends and colleagues who are Google employees about the Podesta phish / DNC hack in the run up to the 2016 election. These comments, while vague and containing no details, lead me to think there was a significant internal push to make sure that Google would catch such types of compromises in the future.
Plus, I’m sure Google has limits on adwords accounts to make sure their customers aren’t surprised by excessive charges. Even verified accounts and even political accounts. The last thing they want to hear is that the adwords bill won’t be paid because the charges weren’t authorised.
Then there’s this:

154. An actual controversy exists between the Campaign and Google as to whether Google’s policies and procedures, and their application thereof, violate the United States Constitution. The correct interpretation is that Google’s policies and procedures, facially and as applied, violate the Campaign’s speech and association rights under the United States Constitution.
As long as I’ve been on the internet, folks have been trying to argue that the First Amendment of the US Constitution applies to private networks. I have yet to see any compelling argument that it does. While there are bigger discussions to be had about the responsibilities of large internet providers, I don’t think they can or should be based on the First Amendment.
I do think networks have a fundamental responsibility to stop abuse on their platform. All of the major networks, and many of the minor ones, have failed spectacularly at doing this. Just this weekend a Facebook friend shared this NYTimes article on the explosion of child sexual abuse material online (CW: child abuse and torture, this is a very difficult read) and how law enforcement and networks have utterly failed to effectively address the issue. There are days I think the Internet has contributed to more harm than good. Reading that article led to one of those days.
The FTC filed suit against Match.com for using fake accounts to entice people into signing up for accounts. (WA Post) Part of the FTC’s allegations include that Match flagged the accounts and prevented them from contacting paying Match users while simultaneously allowing the users to contact free Match users.
I’m actually surprised the FTC took action. I’m not surprised Match allowed, and possibly even encouraged, fraudulent accounts to send mail to registered users. The revenue they were making from the fraud was significant, according to Match’s own numbers.
Hundreds of thousands of consumers subscribed to Match.com shortly after receiving a fraudulent communication. In fact, Defendant has consistently tracked how many subscribers these communications have generated, typically by measuring the number of consumers who subscribe to Match.com within 24 hours of receiving an advertisement that touts a fraudulent communication. From June 2016 to May 2018, for example, Defendant’s analysis found that consumers purchased 499,691 subscriptions within 24 hours of receiving an advertisement touting a fraudulent communication. FTC Complaint (.pdf)
What doesn’t surprise me is that Match didn’t stop the outbound abuse. There are a lot of technology companies that will protect their own users and their own networks, while continuing to profit off of abuse of other networks. I’ve repeatedly talked with companies having delivery problems and pointed out that the fraud was a likely part of the delivery problems. I’ve rarely found any company that cared about fraud that was making them money.