There are a lot of folks in the email industry who take issue with my stance that DMARC is not a viable solution to phishing. DMARC, at its absolute best, addresses one tiny, TINY piece of phishing.
Look at this message I received today. My mail client presents this as from Quickbooks and hides the actual from email address from me. Most mail clients do that by default. It is possible to change this in some clients, like desktop mail.app. But a lot of clients simply take the choice away from the user.
Mail clients are the biggest barrier to stopping phishing. As long as they hide the actual email address, users will be unable to tell when a message is actually phishing.
One of the things I do for clients is look at who is really handling mail for their subscribers. Steve’s written a nifty tool that does an MX lookup for a list of domains. Then I have a SQL script that takes the raw MX lookup and categorizes not by the domain or even the MX, but by the underlying mail filter.
Part of that script classifies domains hosted by Google Apps as a separate filter from Gmail, even though they’re actually all the same underlying system. I never had any real, definitive evidence that the filters were different, just a lot of indirect evidence from seeing mail delivered.
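As a rough sketch of that categorization step (the MX lookups themselves would be done separately; the suffix table here is illustrative, not exhaustive):

```python
# Sketch of categorizing domains by underlying mail filter, given MX
# hostnames that have already been looked up. The suffix table is a
# small illustrative sample, not a complete mapping.
FILTER_SUFFIXES = {
    "gmail-smtp-in.l.google.com": "Gmail",       # consumer Gmail MX
    "aspmx.l.google.com": "Google Apps",         # Google Apps / Workspace MX
    "protection.outlook.com": "Microsoft business",
    "yahoodns.net": "Yahoo",
}

def categorize(mx_host: str) -> str:
    """Map an MX hostname to the underlying mail filter, not the domain."""
    host = mx_host.lower().rstrip(".")
    for suffix, mail_filter in FILTER_SUFFIXES.items():
        if host == suffix or host.endswith("." + suffix):
            return mail_filter
    return "unknown"

print(categorize("ASPMX.L.GOOGLE.COM."))         # Google Apps
print(categorize("gmail-smtp-in.l.google.com"))  # Gmail
```

Note that consumer Gmail and Google Apps domains publish different MX hostnames, which is what makes splitting them into separate buckets possible at all.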
That changed today as I was checking delivery for a client. One of their mailstreams is getting 100% inboxing at Gmail, but 100% spam at Google Apps. That’s pretty clear evidence that Google Apps and Gmail are different filters.
I started looking at that mail in particular. Initially I noticed a feature of the subject line that looked like it may be something a business filter would trigger on. But, on looking deeper, there are other features that make it clear this is a different mail stream. What isn’t different is the From domain, the SPF domain or the DKIM signature.
In any case, this particular pattern makes it pretty clear that Google is specifically depositing this mail stream in the bulk folder of Google Apps users. Meanwhile the messages are going to the inbox at Gmail and all the other messages from this sender are going to the inbox at both places.
Google filters are specific and sensitive. They can identify different mail streams and target messages separately between Gmail and Google Apps.
In my previous career I was a molecular biologist. Much of my work was done on bacteria but after I left grad school, I ended up working in a developmental biology lab. Bacteria were (mostly) simple: just about every trait was controlled by a single gene. We could study what that gene did by removing it from the bacteria or adding it to a well-characterised bacterium.
When I moved to developmental biology, the world got more complex. In higher organisms many traits are controlled by a whole bunch of genes, and there is a lot of redundancy and overlap and duplication. But, there was still quite a bit of removing a gene to see what happens. The lab I was in was specifically studying teratogens – chemicals that interfere with development. The most well known teratogen is thalidomide. In fact, a lot of the work we were doing with vitamin A and alcohol involved many of the same pathways that were disrupted by thalidomide.
One of the important parts of development is controlled by a complex of genes called Hox genes. These do a lot of things, but one of the most important things they do is define what parts of the embryo will become the front and back, the top and bottom and the near and far.
OK, now that we have 3 paragraphs of background, here’s the story. There was one seminar we went to about Hox genes. The research being done was trying to assign specific activities to Hox genes by knocking them out. But, because Hox genes are so redundant, knocking one of them out doesn’t actually change much. There was nothing really wrong with the single knockouts this lab was studying. So, they ended up knocking out two Hox genes. At that point most things still worked, except… two vertebrae switched places.
That story has always stuck with me because you have these genes that are so important they exist in everything from worms to humans. And they’re so vital that higher vertebrates like humans have the same set of genes duplicated across 4 different chromosomes. You knock out two of these vital developmental genes… and the only real evidence of anything happening is two vertebrae switching places.
Recently I’ve been blogging about how to troubleshoot delivery problems. And I realised that a lot of how I treat delivery problems is influenced by my time in research. Much of how I troubleshoot starts with the premise that the things we’re testing aren’t independent variables. Everything, or almost everything, is conditional.
Email filtering, particularly that driven by machine learning, is closer to molecular biology than I realised. We can imagine each individual rule like it’s a gene. And these genes all work together and, in some cases, modify each other. Some rules don’t get activated unless another rule is active, or inactive. In some cases, one rule is so dominant none of the other rules matter. For instance, if an IP is listed on the SBL, your mail is blocked, no questions asked. But, if the sending IP isn’t listed, then hundreds of rules act on the message. Or, on the other end, if a user has a rule that says “always deliver this to my inbox” none of the rules matter, that message will always go to the inbox.
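To make the gene analogy concrete, here’s a toy sketch of interacting filter rules. Every rule name, weight and threshold is invented for illustration; real filters are vastly more complex:

```python
def filter_decision(msg: dict) -> str:
    """Toy model of interacting filter rules. All rule names, weights
    and thresholds are invented; this is an illustration, not a real filter."""
    # Dominant rule: an SBL-listed IP blocks the mail, no questions asked.
    if msg.get("ip_on_sbl"):
        return "reject"
    # A user "always deliver to my inbox" rule dominates everything else.
    if msg.get("user_always_inbox"):
        return "inbox"
    # Otherwise many small rules act together; some are conditional.
    score = 0
    if msg.get("url_shortener"):
        score += 2
        # This rule only activates when another rule is already active.
        if msg.get("new_sending_domain"):
            score += 3
    if msg.get("heavy_image_ratio"):
        score += 1
    return "bulk" if score >= 4 else "inbox"

print(filter_decision({"ip_on_sbl": True}))  # reject
print(filter_decision({"url_shortener": True, "new_sending_domain": True}))  # bulk
```

The point of the sketch is the structure, not the specific rules: one dominant rule short-circuits everything, one rule only fires in the presence of another, and the rest only matter in aggregate.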
Filtering variables aren’t independent. In order to troubleshoot delivery problems, we need to start looking at the whole picture and the whole system. We can’t troubleshoot things in a vacuum.
Back in April I wrote about some poor marketing automation that ended up spamming me with ‘cart abandonment’ emails when the issue was the company’s credit card processing went down. That post has now been scraped by the spammers Moosend and they keep sending me… poorly targeted automated spam.
So, yes, Moosend (who are prolific and annoying spammers) are sending me spam about a blog post where I complain about automated spam. And, yes, I nuked the URLs out of the screenshot because no, they don’t get the free advertising.
I’m just waiting for the email, in 6 months time, where Moosend notices I mentioned their name on this post and start spamming me with pitches for this blog post. Because that would be funny. Annoying and still spammy, but at least it would make me laugh.
(Edit: I’ve apparently confused moosend.com and madberry.com, who are actually the prolific spammers that hit me with “hey, we want to write content for your blog” multiple times a week. This is the first set of spam I’ve gotten from moosend. However, they are actively scraping addresses off websites to send mail.)
A lot of my education in the sciences focused on how to get a statistically accurate sample. There’s a lot of math involved in picking the right sample size. Then there’s an equal amount of math involved in figuring out the right statistical tests to analyse the data. One of the lessons of grad school was: the university has statistics experts, use them when designing studies.
Even in science not everything we test has to be statistically accurate. Sometimes we just want to get an idea if there is something here. Is there a difference between doing X and doing Y? Let’s do a couple pilot tests and see what happens. Is this a line of inquiry worth pursuing?
Much of my statistical knowledge comes from practice, not theory. Most of my advanced classes did have some stats, but I never actually took a statistics class. That leaves me in a strange position when listening to people talking about the testing they do. I know enough statistics to question whether their results are valid and meaningful. But I don’t know enough theory to actually dig down into the numbers and explain why.
In marketing, we do a lot of testing. We use the results of this testing to drive decisions. We call this data driven marketing. I know a lot of marketing departments and agencies do have statisticians and data scientists on hand.
I am sure, though, that some tests are poorly designed and incorrectly analysed. This bad data leads to poor decision making that leads to inconsistent or unexpected results. The biggest problem is people failing to go back and question whether the data used to make the decision means what they think it does.
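One cheap sanity check is a pooled two-proportion test, which tells you whether an observed open-rate difference could plausibly be noise. The numbers below are invented for illustration:

```python
import math

def two_proportion_z(opens_a, sends_a, opens_b, sends_b):
    """Z statistic for the difference between two open rates
    (pooled two-proportion test)."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    pooled = (opens_a + opens_b) / (sends_a + sends_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
    return (p_a - p_b) / se

# Hypothetical A/B test: 20% vs 18% open rate at 5,000 sends per cell.
z = two_proportion_z(1000, 5000, 900, 5000)
print(f"z = {z:.2f}, significant at 95%: {abs(z) > 1.96}")
```

The same two-point difference that is significant at 5,000 sends per cell is nowhere near significant at 100 sends per cell, which is exactly the sample-size question a statistician would ask first.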
Email, and particularly filters, have a lot of non-repeatable elements. Gmail filters, for instance, adapt constantly. Without carefully constructed, controlled and repeated tests we’re never going to be able to tease out the specifics. The even bigger challenge is that the process of testing will, in and of itself, change the results. Run the same series of tests over and over again and the filters may adapt and act differently for test 11 than test 2.
Another piece that leads to poor decision making is thinking our preferences are representative of our audience. Even unconsciously, many of us design marketing programs that fit the way we like to be marketed to. In order to make good decisions, we need to question our own biases and think about what our audience wants.
Finally, there is a lot of value in looking at how people behave. One thing I’ve heard a lot from marketers over the years is that what people say they want is different from how they actually act.
Overall, to make good marketing decisions we can’t just collect random bits of data and use it to justify what we wanted to do anyway. The data always reflects the question we asked, but not always the question we wanted the answer to. Blindly using data, without thinking about our own biases, leads to poor outcomes.
What do you do next when the problem statement is as non-specific as “open rates are falling”? How would you go about getting from there to that next level of “marketing email from our ESP goes to bulk”?
That’s a great question, and will help me explain pieces I didn’t in the initial post.
The observation here is: “open rates are falling.”
We know that open rates are what happens when someone loads a 1×1 pixel in their mail client. There are delivery reasons why this will fail and there are non-delivery reasons why this will fail. We want to figure out why pixels aren’t loading.
What are common reasons pixel loads fail?
The message is too long and Gmail is cutting off the 1×1 pixel.
There is something wrong with our tracking server.
The mail isn’t making it to the inbox.
The mail isn’t interesting enough for folks to open.
(I really need a flow chart to make this pretty, but I’ll write it out here for now. Basically, I ask a question and then take next steps based on the answer. The questions are sorta ordered, but they don’t have to be asked and answered in this order).
Where are open rates falling? Is this happening everywhere or just at a few places? The answer here gives me another pathway to follow.
If, for instance, it’s only Gmail, then maybe this is just a longer message than normal and it’s being truncated. Not a delivery problem, but we should do a test and see if Gmail is truncating the message.
If, for instance, it’s across all my recipients and it’s not isolated to a specific domain, then maybe my tracking server fell over. Not a delivery problem, but we should go talk to the web folks.
If, for instance, it’s at Yahoo, some cable companies, and AOL, then that may be a problem at the Yahoo domains. Let’s look deeper into those domains.
Are the open rates at the affected domains zero or just smaller than normal?
An open rate of zero suggests mail may be fully blocked or going to spam.
We can look at our SMTP logs to see if there are active blocks visible
We can do some tests to our own freemail accounts to see where mail is going for those addresses
A lower open rate suggests some mail is going to bulk.
Do we have an inbox monitoring tool available? Let’s add that to the next mailing so we can see where mail is delivered.
Can we run some tests to our own mailboxes to see where mail is delivered?
The idea here is we’re trying to determine which pathway to go down. Was there something technical that prevented the pixel from loading and caused a falsely low reading? Or was there something about the message that caused fewer recipients to actually open it?
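The branching above can be sketched as a rough decision function; the scope labels and next-step wording are invented placeholders for whatever your own reporting exposes:

```python
def next_step(drop_scope: str, open_rate: float) -> str:
    """Rough sketch of the troubleshooting flow above. Scope labels and
    thresholds are invented placeholders, not real tooling output."""
    if drop_scope == "gmail_only":
        return "test whether Gmail is truncating the message"
    if drop_scope == "everywhere":
        return "check the tracking server with the web folks"
    # Drop isolated to Yahoo-family domains (Yahoo, AOL, some cable ISPs).
    if open_rate == 0:
        return "check SMTP logs for blocks; test to freemail accounts"
    return "run inbox monitoring / seed tests for bulk foldering"

print(next_step("yahoo_family", 0.0))
```

Each answer narrows which question you ask next, which is the whole point of working from a clear observation rather than jumping straight to “delivery is broken”.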
The next set of questions aren’t so much about troubleshooting, but about other things I’d think about.
Is there any deviation from normal related to the mail client?
Could there be something about this template that corrupted the subject line and caused folks not to open it?
Are my tests going to the inbox?
Did I put any new domains or content in the email?
Was there a DNS problem that caused a temporary failure in authentication?
There are a multitude of reasons that open rates may fall. Fixing delivery problems, be they blocks or spamfoldering, can take extensive work and time to resolve. Before jumping to the conclusion that delivery is poor, figure out if there are other, easier-to-solve reasons that explain a low open rate.
At the end of last year, Steve wrote a post about the different types of authentication. I thought I’d build on that and write about the costs associated with each type. While I know a lot of my readers are actually on the sending side, I’m also going to talk about the costs associated with the receiving side and a little bit about the costs for intermediaries such as CRM systems or ESPs.
Overall, SPF is a cheap technology to deploy for almost everyone, although this type of authentication is prone to breakage from standard email processing, including forwarding.
SPF is very cheap to implement for senders. A couple hours, tops, for someone to identify what domains are used in the 5321.from address, what IPs they send from, and to write an SPF record to cover them. The cost of maintaining SPF is minimal, with records only needing to be revisited when servers and domains change.
For receivers, there’s a little more cost involved in deployment, but it’s still reasonably cheap. The receiving system has to add code to do a DNS TXT lookup on the domain in the 5321.from and compare that to the IP address sending the mail. They then have to implement a decision pathway and email handling. But, overall, it’s not that expensive to do. The cost of maintaining this is minimal.
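As an illustration of how simple the core comparison is, here’s a minimal sketch of matching a connecting IP against the ip4: mechanisms of an SPF record. A real implementation (per RFC 7208) also handles include:, a, mx, redirect and the DNS lookup limits; the record and IPs below are hypothetical:

```python
import ipaddress

def spf_ip4_pass(record: str, client_ip: str) -> bool:
    """Minimal sketch of one piece of an SPF check: does the connecting
    IP match any ip4: mechanism in the record? Real evaluation (RFC 7208)
    also handles include:, a, mx, redirect= and DNS lookup limits."""
    ip = ipaddress.ip_address(client_ip)
    for term in record.split():
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:], strict=False):
                return True
    return False

# Hypothetical record and IPs, for illustration only.
record = "v=spf1 ip4:192.0.2.0/24 ip4:198.51.100.10 -all"
print(spf_ip4_pass(record, "192.0.2.25"))   # True
print(spf_ip4_pass(record, "203.0.113.5"))  # False
```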
SPF requires at least one, sometimes more, DNS lookups incurring a small cost in computing resources.
In this case, the intermediaries may have the most expensive deployment process. Even then, the expense is mostly in documentation and publishing ranges for their customers to use. The cost of maintaining SPF is minimal.
The cost of deployment for DKIM increases over that of SPF although for most groups it’s not that much more. At the same time it provides a less fragile authentication than SPF.
Whether a sender runs their own MTA or uses a third party, deployment costs are pretty similar. The sender needs to generate a public/private key pair, install the private key on the sending server and publish the public key in DNS. The cost of doing this is low for senders using an MTA that understands DKIM.
For senders needing to upgrade their MTA, the cost is more, but really, DKIM has been around for more than a decade; it’s probably time to upgrade anyway. If a sender rotates their keys regularly, a recommended best practice (RFC5863, DMARC working group, MAAWG), this does incur a small cost.
DKIM requires a cryptographic hash to be computed for each email sent, which does incur a small cost in computing resources.
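For a sense of what that hash involves, here’s a sketch of computing a DKIM body hash (the bh= tag) using something like “simple” body canonicalisation. Real signing (RFC 6376) also canonicalises selected headers and signs the result with the private key:

```python
import base64
import hashlib

def dkim_body_hash(body: str) -> str:
    """Sketch of the bh= value for a DKIM signature using roughly
    'simple' body canonicalisation. Real signing (RFC 6376) also
    canonicalises headers and applies the RSA/Ed25519 private key."""
    # Normalise to CRLF line endings and exactly one trailing CRLF.
    canonical = body.replace("\r\n", "\n").replace("\n", "\r\n")
    canonical = canonical.rstrip("\r\n") + "\r\n"
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return base64.b64encode(digest).decode("ascii")

bh = dkim_body_hash("Hello, world\n")
print(f"bh={bh}")
```

This also illustrates DKIM’s fragility: any change to the canonicalised body produces a different bh= and breaks the signature, while changes the canonicalisation absorbs (like extra trailing blank lines) do not.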
The cost to a receiver to deploy DKIM checking is similar to that of deploying SPF checking. There needs to be a way to check a DKIM signature at, or close to, the time of delivery. Then they need to have a clear idea of what happens when a signature passes or when it fails.
DKIM requires a DNS lookup for the public key and a cryptographic hash to be computed for each email received, incurring a small cost in computing resources.
As with SPF, intermediaries may actually have the most expensive deployment costs. They have all the same costs as a regular sender implementing signing including enabling DKIM signing, generating keys and publishing DNS records.
In addition to these costs, many intermediaries are expected to allow their own customers to sign with custom d= domains. This means creating processes for accepting private keys from customers, enabling signing with many different domains (and ensuring that the right key signs for the right customer), and having the ability to sign with their own domain as well. Furthermore, they need to have documentation and support for customers who want to do this.
Ongoing costs are minimal.
DMARC can be an expensive technology to deploy, for all parties.
In order for senders to deploy DMARC across their entire organisation, they need to do, at a minimum, the following:
Identify every outgoing source of email from their organisation;
Check to see if SPF or DKIM authentication uses the organisational domain for authentication;
Update any authentication that does not align to authentication that aligns;
Configure a reporting address to receive reports about email that fails DMARC;
Create processes to handle DMARC failure reports;
Publish a DMARC record;
Regularly review DMARC failure reports to identify legitimate sources of email failing DMARC.
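The record published in the penultimate step is a single DNS TXT record. A hypothetical starting point, which collects reports without affecting delivery, might look like:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```

The policy would only be tightened to quarantine or reject after the failure reports have been reviewed and legitimate mail sources fixed.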
If, for whatever reasons, the whole organisation can’t or won’t go to full DMARC protection, there is an abbreviated deployment. This involves using a subdomain for the DMARC authenticated mailstream separate from the domain used for email from the rest of the organisation. In this case, the steps are as follows:
Create a specific subdomain for DMARC authenticated email to use;
Set up SPF and DKIM authentication for that specific domain;
Configure a reporting address to receive reports about email that fails DMARC;
Create processes to handle DMARC failure reports;
Publish a DMARC record;
Regularly review DMARC failures to ensure legitimate mail is not failing DMARC.
DMARC deployment can be very expensive although it’s slightly less expensive to do a partial deployment on a subdomain.
Ongoing costs, even when everything is going fine, can also be substantial, particularly if the reporting scheme is simply paying a third party to handle reports for you.
Companies receiving mail who want to participate in DMARC need to create a process where they evaluate incoming mail for alignment and act on it according to how that ISP interprets DMARC policy statements. This can involve significant costs.
If a company wants to provide DMARC reports back to senders, then they also need a way to generate the reports and send email based on the DMARC policy statements. Again, deployment can be expensive.
Ongoing costs vary, but are unlikely to be excessive.
Companies sending mail on behalf of other companies have costs as well, but in the case of DMARC their costs aren’t higher than other groups’. They simply need to ensure that their system is able to send customer mail that is aligned. Some intermediaries built in the ability to support custom SPF and DKIM when they initially deployed the technology; these companies had no extra costs when it came to supporting DMARC. Other companies did not, and had to build the systems and processes to support custom SPF and DKIM in order to support DMARC.
When we look at the evolution of authentication, we see an increase in costs for each new type of authentication. One of the challenges with DMARC adoption is the cost. Overall, SPF and DKIM were cost effective for most companies to implement and they don’t cost too much to maintain. DMARC, though, is expensive and complicated to implement. A lot of the slow adoption is due to the cost and the lack of a clear and persistent benefit.
The next “stage” of authentication is BIMI, which is building on DMARC. BIMI is another technology that will be expensive to deploy, both for sending organisations and receiving organisations. Whether or not the benefits outweigh the costs remains to be seen.
Everyone has their own way of troubleshooting problems. I thought I would list out the steps I take when I’m trying to troubleshoot them.
Clarify the problem. As a consultant, folks come to me asking me to help them solve their delivery problems. My first step is to get them to clarify what symptoms they’re seeing. Something happened to make them contact me, and that’s where we start. Questions at this stage include:
Is mail going to bulk?
Is mail being rejected?
Where is mail rejected?
Are open rates steady?
What mail is affected?
Once you have the answers to these questions, you have your problem statement. This is a one or two sentence summary that identifies the full issue. This is a step a lot of folks skip, but a good problem statement drives the troubleshooting. Example problem statements:
Marketing email from our ESP goes to bulk at Gmail.
CRM email is seeing high numbers of temp failures at Yahoo.
Our domain is blocked by Filter provider.
Identify the likely causes of the problem. Filters are pretty specific, so you want to change things that will actually make the filters do something different, not just apply random best practices you’ve found on a website or blog (even this one). Common techniques, like sending to engaged users or removing bounced addresses, don’t work everywhere, so you may end up weeks down the line with no improvement to show for it.
“Improving engagement” only works if the problem is bulk foldering at the three big mailbox providers. It does nothing anywhere else.
“List hygiene” only works if the problem is your list isn’t bounce handled (which is never, if you’re using a real ESP).
Implement specific changes to address the reason for the delivery problems. What was the problem and how do you back it out?
Our sales dude decided to harvest addresses of a social networking site and add them to our newsletter list. We purged all those addresses (or all the addresses that clicked on a link that wasn’t unsubscribe) from our list.
There was a flaw in our data handling process and we reactivated addresses that had bounced off in the past. We fixed the flaw and have removed the previously bounced addresses.
We changed our frequency and ended up sending too much mail, causing recipients to report the mail as spam. We backed down to our old volume and are being more selective about who gets the new, higher volume mailings.
Some of our sales folks have not been abiding by company directives and are using addresses they’ve ‘saved’ from previous campaigns but that should be suppressed. We’ve implemented technical steps to prevent them sending mail to these addresses again.
All of these are actual solutions clients have implemented over the years. They’re all specific and required a clear understanding of mail processes and data flow. Once we identified the problems, the solutions fell out the end, really.
Too often, though, delivery folks don’t actually ask the right questions and don’t take the time to identify the problem. Instead, they “implement best practices” and make random changes hoping something will work. Some of the time it does; the best practice change randomly hits on the underlying cause of the problem. But when the random best practice changes don’t work, you absolutely need to step back and start from the beginning.
Well, it’s 2020. The start of a new year and a new decade, or not depending on what number theory you use to count decades. Personally, I think we, as pattern loving humans, just happen to love numbers that end with 0 and we’re going to consider it special whether or not it’s the actual end or start of a decade.
This is the point in time where many blogs are doing year end (or, in this case decade-end) reviews. I have to admit, for me 2019 was a blur. We’re still not quite settled here in Dublin and it just doesn’t feel like a time for any kind of review or look back. Instead, we’re looking forward to what’s coming in 2020.
The move has provided us with opportunity to really consider what products and services we’re offering. Which are the ones meeting the needs of marketers and which are the ones that are tired and old. What new things can we provide to give marketers more insight and information into getting the mail to the inbox? We have some exciting things coming up in 2020 and are working hard to get them ready for prime time.
One of the big things is editing and updating our ISP information pages. Most of what is there was written early in the decade, and desperately needs to be updated. I’ve been working on that and will be updating pages throughout the beginning of the year.
I also hope to be traveling and speaking more this year. For the first time in more than a decade I did no business travel in 2019. I’m ready to get out again. Have a conference where you want a deliverability or email filter talk? Hit me up.
There are other plans in the works, too. We’re excited for 2020, and we hope you are too.
Some notes on some of the different protocols used for authentication and authentication-adjacent things in email. Some of this is oral history, and some of it may be contradicted by later or more public historical revision.
Associates an email with a domain that takes responsibility for it.
Originally Sender Permitted From, now Sender Policy Framework. It allows a domain owner to announce which IP addresses mail using a particular return path should be sent from, and whether a recipient should accept mail sent from other IP addresses as legitimate.
It authenticates the domain in the return path, not any hostname that’s visible to the recipient.
By allowing recipients to detect “probably forged” return paths, SPF allows recipient ISPs to avoid backscatter, by not sending asynchronous bounces in response to spam with forged return paths.
Secondarily, it provided a way to tie a message to a domain name – the one hidden in the return path. Simple blocking of mail that violated SPF stopped a lot of spam, though not in a particularly sustainable way.
It allowed senders to take responsibility for mail they sent (authentication) and deny responsibility for mail they didn’t send (repudiation).
SPF on its own is primarily seen as a way for a domain to advertise which IP addresses it expects to send email from, and so to associate an email with a domain that takes responsibility for it. This allows the reputation of a mail stream to be monitored, keyed on the associated domain.
It’s widely used for authentication, but use of SPF for repudiation (blocking mail that fails SPF with a -all result) has pretty much disappeared.
It’s also a major building block for DMARC. Having the return path and the domain in the visible From: header be “in the same domain” has additional benefits on its own as well as being part of the DMARC process.
SPF authentication is tied to the peer IP address of the SMTP transaction. That means that most sorts of forwarding – vanity domains, mailing lists etc. – will cause SPF to fail. There were attempts to mitigate that, by having the forwarder rewrite the return path (sender rewriting scheme) but as recipient ISPs moved to ignoring SPF failures as far as delivery decisions were concerned they didn’t make that much impact (but see DMARC and ARC).
Benefits and risks
An SPF pass is considered a positive sign, at least if the associated domain has some history. A missing or failed SPF check isn’t widely considered a negative (unless the sender has opted-in to it being used that way, by publishing DMARC records).
Most of the benefits of SPF require that you use a domain you control in your return path rather than one you don’t own, such as your ESP’s bounce domain. Most ESPs should be able to support that, but it will require delegating control to the ESP via adding DNS records to your domain.
It’s generally cheap to deploy for most senders and deploying it has no real risks, but management of it can get more complex when multiple mail streams are in use.
SenderID was intended to be a “better” SPF, one that authenticated based on the hostname visible to the recipient in the From: header. It had a new style policy record that started with “spf2.0/pra” rather than “v=spf1” but would fall back to the old-style v=spf1 records.
Microsoft still appear to check for traditional SPF records based on the domain in From: header, which is technically a SenderID check, but they say they don’t look for spf2.0/ style SenderID records just v=spf1 style SPF records.
Benefits and risks
Based on anecdotal evidence publishing an SPF record for the domain in the visible From: can improve deliverability at Microsoft properties.
There’s not really any risk to publishing that SPF record, whether accurate or not, and the only costs are the maintenance overhead and the (valuable) space its DNS record takes up in the root of a zone.
Publishing a spf2.0/ style SenderID record is probably pointless.
DKIM allows the sender of an email to attach a hostname to a message in a way that can be cryptographically validated by a recipient.
It authenticates the domain (often called the “d=”) in the DKIM-Signature header, not any hostname that’s visible to the recipient.
DKIM was intended to allow a sender to take responsibility for an email via an attached domain name, allowing recipients to track reputation via that domain name rather than via, e.g., sending IP addresses.
DKIM is used much as it was intended, to associate a domain with an email.
It’s also a major building block for DMARC. Having the d= domain and the domain in the visible From: header be “in the same domain” has additional benefits on its own as well as being part of the DMARC process.
DKIM relies on a cryptographic signature of the body of the email and a – sender-chosen – subset of the email headers. If any of those are changed at all then the DKIM signature will be broken.
There are obvious ways that the content can be modified – mailing lists adding annotations to the subject line or footers to the body, for instance. But there are also a lot of subtle ways it can be modified. If mail is sent using an “unusual” structure – overly long lines, unwrapped headers, unusual content transfer encoding – then an intermediate mail system may “fix up” the mail in a way that doesn’t change the semantics of the message and which wouldn’t be visible changes to the recipient, but which will break the DKIM signature.
Benefits and risks
DKIM is much more robust against being broken in transit than SPF, and gives similar reputation advantages. It also allows participation in some feedback loops.
It is more complex to deploy, requiring emails to be cryptographically signed as they’re sent, but that’s well supported on current MTAs.
To get most of the advantages of DKIM you need to sign it with a domain you control rather than with a domain you don’t own, such as that of your ESP. Most ESPs should be able to support that, but it will require adding DNS records to your domain to delegate control to your ESP.
DMARC allows a domain owner to state that all mail they send with their domain in the visible From: header will be authenticated by them (via SPF or DKIM). It’s for repudiation, not authentication. It effectively changes the semantics of SPF and DKIM from having them being a positive signal to not having them being a (very) negative one.
It also allows a domain owner to request notifications about mail that appears to be sent from their domain but which isn’t correctly authenticated. This is critical to being able to check that you really are authenticating (most of) your email, and to do so before you ask recipients to discard potentially legitimate mail that’s not authenticated. This is the critical feature that distinguishes DMARC from previous attempts at email repudiation such as SSP, ADSP and SPF.
Brand protection and anti-phishing.
Mostly brand protection. Anti-phishing is still given lip service, and DMARC does mitigate phishing from the most naive phishers, but it’s not particularly effective against an adversary who’s learned to adapt in the years since DMARC began to be widely deployed.
It has also changed some of the semantics of SPF and DKIM authentication. DMARC introduced the idea of “DMARC-aligned” authentication, meaning that the return path used by SPF or the d= used by DKIM is “in the same domain as” the email address in the From: field. Many ISPs will provide preferential treatment to email that is authenticated in a DMARC-aligned way, even if the domain is not publishing DMARC records.
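The alignment check itself is simple to sketch. This hedged Python example uses a naive “last two labels” notion of organizational domain – real DMARC implementations use the Public Suffix List, which this deliberately ignores:

```python
def org_domain(domain: str) -> str:
    # Naive organizational-domain sketch: the last two labels.
    # Real implementations consult the Public Suffix List, so this
    # gets e.g. *.co.uk domains wrong.
    return ".".join(domain.lower().rstrip(".").split(".")[-2:])

def dmarc_aligned(from_domain: str, auth_domain: str, strict: bool = False) -> bool:
    # Relaxed alignment: same organizational domain.
    # Strict alignment: exact domain match.
    if strict:
        return from_domain.lower() == auth_domain.lower()
    return org_domain(from_domain) == org_domain(auth_domain)

assert dmarc_aligned("example.com", "mail.example.com")               # relaxed: aligned
assert not dmarc_aligned("example.com", "mail.example.com", strict=True)
assert not dmarc_aligned("example.com", "example-esp.net")            # different org
```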
Benefits and risks
Using DMARC in “reporting only” mode, either “p=none” or “p=quarantine pct=0” is an extremely useful tool for mapping out your mail flows and finding sources of legitimate email that aren’t correctly authenticated. This is useful in itself, as well as being an essential step towards DMARC enforcement. It will potentially generate a lot of data, though, and you’ll need to budget for infrastructure and personnel time to handle and analyze those reports. There are very few deliverability or usability risks in this mode, though there are a few mailing lists which will modify their behaviour in a potentially user-surprising way for users of your domain.
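A reporting-only deployment is a single TXT record – the domain and report mailbox here are hypothetical placeholders:

```
_dmarc.example.com.  IN TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.com"
```

Aggregate reports then arrive at the rua= address from each participating receiver, which is where the infrastructure and analysis budget mentioned above gets spent.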
To deploy DMARC in enforcing mode effectively requires deploying SPF and DKIM everywhere as a prerequisite (while DMARC only requires that either SPF or DKIM passes, both authentication approaches are fragile and will occasionally break, so you want to have them both in place to minimize the risk of DMARC failing). It also requires ongoing management and monitoring to ensure that authentication hasn’t stopped working and that no new mailstream has been deployed without it.
DMARC in enforcing mode is likely to reduce your deliverability rather than increase it, as anything unexpected in mail flow – whether it be a mistake on the sender’s part, a mistake by the receiver ISP or some sort of forwarding – will cause mail to be lost. In theory an ISP may use the existence of enforcing DMARC as a positive signal, but that’s probably far outweighed by the positives of the DKIM and SPF you had to put in place to get there. (There are a lot of “soft” effects that DMARC might conceivably have indirectly on recipient behaviour that could have an impact, but until someone independent does real research on that, it remains speculation.)
For domains that are entirely dedicated to sending bulk mail, and which have no humans sending mail with normal mail clients, the overhead of deploying DMARC can be tiny, especially if use of DMARC is baked in from the beginning. For more complex domains, or for retrofitting DMARC to a domain that is already in use, the effort required to deploy DMARC while being sure that ongoing operations aren’t going to be impacted can be significant, potentially requiring months of elapsed time and person-years of work.
Enforcing DMARC is a prerequisite for BIMI.
Naive or uncaring use of DMARC breaks mailing lists and forwarding. ARC fixes some cases of that.
ARC is Authenticated Received Chain. It allows mail forwarders to communicate whether the mail they’re forwarding was authenticated before they forwarded it.
It’s standardized, it’s being deployed by receiver ISPs and forwarders. It seems to do what it says on the box.
Benefits and risks
It’s something that most people don’t have to care about, other than knowing that by making forwarding of authenticated mail more robust it mitigates some of the risks of DMARC.
BIMI lets a sender have their logo displayed next to their email in the inbox at some email providers.
To allow email senders to loot their company’s marketing budget to pay for the costs of deploying DMARC.
No, really. But it’s probably a good thing.
BIMI allows whitelisted senders to display an image next to their email if their mail passes DMARC and they’re vetted in some manner. There’s not really any technical reason for BIMI to require DMARC (as opposed to DKIM and SPF), but it’s a good carrot that mailbox providers can use to encourage senders to deploy DMARC.
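The record itself is another TXT lookup – this is a hedged example with hypothetical URLs, where l= points at the logo (an SVG in the constrained “SVG Tiny PS” profile) and a= at the Verified Mark Certificate that providers requiring vetting will check:

```
default._bimi.example.com.  IN TXT  "v=BIMI1; l=https://example.com/brand/logo.svg; a=https://example.com/brand/vmc.pem"
```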
Gradually being rolled out at some providers.
Benefits and risks
Data on whether having the brand’s logo displayed next to an email helps with recipient trust or avoiding phishing has been mixed. There’s no real downside, though, beyond the risks of deploying DMARC and the costs of being vetted.
If you already have DMARC in place you should consider BIMI. If you’re considering deploying DMARC you should probably include BIMI as part of that proposal.
STARTTLS is an extension to SMTP that allows mailservers to exchange mail over an encrypted channel rather than as plain text.
Opportunistic STARTTLS is widely supported by receiver ISPs. It’s used for transport security, protecting traffic from passive interception during delivery.
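The mechanics are visible in the SMTP dialogue itself – a sketch, with hypothetical hostnames, of how a sending server (C) discovers and uses the extension offered by the receiver (S):

```
S: 220 mx.example.com ESMTP
C: EHLO mail.sender.example
S: 250-mx.example.com Hello
S: 250-STARTTLS
S: 250 8BITMIME
C: STARTTLS
S: 220 Ready to start TLS
   (TLS handshake; the SMTP session then restarts over the encrypted channel)
```

Because it’s opportunistic, a sender that doesn’t see STARTTLS advertised simply falls back to plain text, which is why it protects against passive interception rather than active attackers.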
Benefits and risks
Much as most of the web is quietly moving from unencrypted http to TLS-protected https, the same is true of email. Supporting it when sending email is typically just a configuration setting, and there’s no real drawback to turning it on beyond a slight increase in CPU usage (which is unlikely to be a limiting factor for delivery rates anyway).
Google have stated that they believe in TLS everywhere, to the extent that they provide search benefits to sites that offer it. It’s safe to assume that they’d like to see TLS on their inbound email too.
Not exactly authentication, rather a way to say “this domain doesn’t accept email”.
Ideally when you’re sending email you’d look up the MX records of the recipient in DNS, then send to one of those. If there were no MX records you wouldn’t send the email. But for backwards compatibility with the internet of the 1980s you also have to check for an A record if there are no MX records.
If a domain doesn’t want email, but does want a webserver, they’ll publish an A record for the webserver and so senders will try to deliver mail there.
Null MX is a formal way to publish MX records saying “don’t even bother trying to send mail”.
Widely supported, and many smarthosts will special-case it so as to not even try to send the mail, immediately suppressing recipients at those domains without any retry attempts.
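The sender-side selection logic described above can be sketched in a few lines of Python. RFC 7505 defines Null MX as a single MX record with preference 0 and a target of “.” (i.e. `example.com. IN MX 0 .`); this is a simplified model that takes the lookup results as inputs rather than doing live DNS:

```python
def delivery_targets(mx_records, a_records):
    """Pick SMTP delivery targets (simplified RFC 5321 / RFC 7505 sketch).

    mx_records: list of (preference, hostname) tuples from an MX lookup.
    a_records: list of addresses from an A lookup (the legacy fallback).
    Returns hosts to try in order, or [] meaning "never deliver".
    """
    if mx_records:
        # Null MX: one record, preference 0, target "." means the
        # domain never accepts mail -- fail immediately, no retries.
        if len(mx_records) == 1 and mx_records[0] == (0, "."):
            return []
        # Otherwise try MX hosts in preference order.
        return [host for _, host in sorted(mx_records)]
    # No MX records at all: fall back to the A record, a backwards
    # compatibility rule left over from the 1980s internet.
    return list(a_records)

assert delivery_targets([(0, ".")], ["192.0.2.1"]) == []          # Null MX wins
assert delivery_targets([], ["192.0.2.1"]) == ["192.0.2.1"]       # A fallback
assert delivery_targets([(10, "mx2.example.com"), (5, "mx1.example.com")], []) \
    == ["mx1.example.com", "mx2.example.com"]
```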
Repudiated Mail From, Mail Transmitter, RMX, DMP
These were all early attempts, starting in the late 1990s and crystallizing in the early 2000s, to identify legitimate return paths in email. They evolved into SPF.
DomainKeys, Identified Internet Mail
DomainKeys was developed by Yahoo, and Identified Internet Mail by Cisco to verify the source of email. Rather than having two competing standards the authors merged them to give DKIM.
DomainKeys had an outbound signing policy that added repudiation of the From: field, much as DMARC does to DKIM/SPF.
SSP, ASP and ADSP
DKIM dropped DomainKeys’ outbound signing policy, keeping the operational problems associated with it out of the DKIM standardization process by deferring it to a separate standard.
That standard was called variously SSP (Sender Signing Practices), ASP (Author Signing Practices) and ADSP (Author Domain Signing Practices). They were standardized in 2009, but abandoned due to being unused. The experience from that experiment informed DMARC’s development.