Major industry news today: Proofpoint announced it will acquire Cloudmark for $110 million in cash. Prior to this acquisition, Proofpoint focused on business filters, while Cloudmark focused on selling into large ISPs, including the big cable providers, and mobile carriers. Proofpoint assured investors they will continue supporting and developing the Cloudmark filters while incorporating the Cloudmark Global Threat Network into their Nexus platform.
A few things came to mind when I saw the announcement.
Both companies focused on different types of email filtering. Proofpoint developed products for business, building filters that address spam but do a lot more. Many of the filter features have nothing to do with blocking mail, focusing instead on other business-critical functions like protecting intellectual property and maintaining compliance with various laws and regulations. Cloudmark, on the other hand, created filters that businesses could deploy to protect consumers as well as use internally.
With this acquisition we’re starting to see a consolidation of functionality. The distance between business filters and consumer filters continues to close.
Filtering isn’t just about spam, though.
This acquisition improves Proofpoint’s ability to filter things other than spam. Their announcement specifically calls out spear phishing and business email compromise (BEC) as problems. They are. Criminals steal billions of dollars from businesses through email attacks. These same types of attacks were employed in the 2016 US elections against candidates and parties.
It feels like we’re embarking on a new phase of security and compliance. The tools we built to deal with spam and protect the internet from abuse generally worked. Our mail infrastructure isn’t falling down due to spam. Now we need to look forward to handling different kinds of abuse. The same people who stepped up to the plate in the early 2000s to address spam are now looking at how to protect individuals online.
It’s a nice internet we’ve got here. Let’s see if we can keep it.
October was a busy month. In addition to onboarding multiple new clients, we got new desks, I went to Toronto to see M3AAWG colleagues for a few days, and I had oral surgery. Happily, we’re finally getting closer to having the full office set up.
What is an office without a Grover Cat? (He was so pleased he figured out how to get onto it at standing height.)
All of this means that blogging was pretty light this month.
One of the most interesting bits of news this month is that the US National Cybersecurity Assessments & Technical Services Team issued a mandate on web and email security, which Steve reviewed here.
In best practices, I briefly noted the importance of using subdomains rather than entirely new domain names in links, emails, and even DKIM keys.
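To make the distinction concrete (all domain names here are hypothetical placeholders), the idea is to keep mail and link infrastructure under your primary organizational domain rather than registering separate look-alike domains:

```
Prefer subdomains of your primary domain:
  Click links:    https://links.example.com/c/12345
  Return path:    bounce@mail.example.com
  DKIM signature: d=example.com  (key published at sel1._domainkey.example.com)

Avoid registering separate look-alike domains:
  https://example-mail.com/c/12345
  bounce@exampledeals.net
```

Subdomains inherit the reputation and recognizability of the parent domain; unrelated look-alike domains start with no reputation and resemble the patterns phishers use.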
We’ve talked about engagement-based filters before, but it’s interesting to note how they’re being used in business environments as well as consumer environments.
We also put together a survey looking at how people use Google Postmaster Tools. The survey is now closed, and I’ll be doing a full analysis over the next couple of weeks, as well as talking about next steps. I did a quick preview of some of the highlights earlier this week.
Finally, a lot of industry news this month: Most notably, Mailchimp has changed its default signup process from double opt-in to single opt-in. This caused quite a bit of Sturm und Drang from all corners of the industry. And, in fact, a few days later they announced that double opt-in would remain the default for .eu senders. I didn’t get a chance to blog about that as it happened. In other news, the Road Runner FBL is permanently shuttered, and Edison Software has acquired Return Path’s Consumer Insight division. Also worth noting: Microsoft is rolling out new mail servers, and you’ll likely see some new — and potentially confusing — error codes.
My October-themed photo is behind a cut, for those of you who have problems with spiders.
Back in the dark ages (the late ’90s) most people used dialup to connect to the internet. Those people who had broadband could run all sorts of services off them, including websites and mail servers and such. We had a cable modem for a while handling mail for blighty.com.
At that time blighty.com had an actual website. This site hosted some of the very first online tools for fighting abuse and tracking spam. At the same time, both of us were fairly active on USENET and in other anti-spam fora. This meant there were more than a few spammers who went out of their way to make our lives difficult. Sometimes by filing false complaints, other times by actually causing problems through the website.
At one point, they managed to get a complaint to our cable provider and we were shut off. Steve contacted their postmaster, someone we knew and who knew us, who realized the complaint was bogus and got us turned back on. The postmaster also said he was flagging our account with “the blighty flag,” meaning he had to review the account before it could be turned off in the future.
I keep imagining the blighty flag looking like this in somebody’s database.
That is to say, sometimes folks disable accounts they really shouldn’t be disabling. Say, for instance:
This was an accident by a Twitter employee, according to a post by @TwitterGov.
Twitter needs a blighty flag.
I closed the Google Postmaster Tools (GPT) survey earlier today. I received 160 responses, mostly from the link published here on the blog and in the M3AAWG Senders group.
I’ll be putting a full analysis together over the next couple weeks, but thought I’d give everyone a quick preview / data dump based on the analysis and graphs SurveyMonkey makes available in their analysis.
Of 160 respondents, 154 are currently using GPT. Some of the folks who said they didn’t have a GPT account also said they logged into it at least once a day, so clearly I have some data cleanup to do.
57% of respondents monitored customer domains. 79% monitored their own domains.
45% of respondents logged in at least once a day to check. Around 40% of respondents check IP and/or domain reputation daily. Around 25% of respondents use the authentication, encryption and delivery errors pages for troubleshooting.
10% said the pages were very easy to understand. 46% said they’re “somewhat easy” to understand.
The improvement suggestions are free text, but SurveyMonkey helpfully assembles them into a word cloud. It’s about what I expected, but I’ll dig into that data.
10% of respondents said they had built tools to scrape the page. 50% said they hadn’t but would like to.
As for what respondents want to do with the data: 82% said they want to be able to create alerts, and 60% said they want to add the data to dashboards or reporting tools.
97% of respondents who currently have a Google Postmaster Tools account said they are interested in an API for the data. I’m sure the 4 who aren’t interested won’t care if there is one.
47% of respondents said if there was an API they’d have tools using it by the end of 2017. 73% said they’d have tools built by end of Q1 2018.
33% of respondents send more than 10 million emails per day.
75% of respondents work for private companies.
70% of respondents work for ESPs. 10% work for retailers or brands sending through their own infrastructure.
That’s my initial pass through the data. I’ll put together something a bit more coherent, with more useful analysis, in the coming week and publish it. I’m already seeing some interesting correlations worth exploring to pull useful information out of the data.
Thank you to everyone who participated! This is interesting data that I will be passing along to Google. Rough mental calculation indicates that respondents are responsible for multiple billions of emails a day.
I’ll be closing down the Google Postmaster Tools survey Oct 31. If you’ve not had a chance to answer the questions yet, you have through tomorrow.
This data will be shared here. The ulterior motive is to convince Google to make an API available soon due to popular demand.
As of October 31, 2017 signup forms and popup boxes provided by Mailchimp will no longer default to a double / confirmed opt-in process.
Starting October 31, single opt-in will become the default setting for all MailChimp hosted, embedded, and pop-up signup forms.
This announcement was made earlier today in their newsletter and has been spreading like wildfire around the email community.
Of course, everyone has their opinion on why, including me. I haven’t talked to anyone over there about this, but I suspect this relates to the listbombing issue.
I expect that part of their response to subscription bombing was to look at their subscription forms and harden them against abuse. But, as they were looking at it, they also started thinking about the confirmed opt-in (COI) process and how COI itself could be used as an attack vector.
The result is removing the COI component from their default forms. Customers who want or need to continue to use COI can enable that option on their setting page.
I feel like I’ve blogged a lot about COI in the past, but looking through old posts I can’t actually find many on it. (COI: an old topic resurrected, Sledgehammer of COI.) There’s a reason for that: COI is a tool, and it’s useful in some circumstances. But it’s not THE solution to deliverability problems.
The discussions around this change have been interesting.
From my perspective, this is not a huge change. No one who used Mailchimp was forced into using COI. There were always ways to work around the default. It makes it easier for some of their customers to run single opt-in mailing lists but it’s only one ESP changing their policies.
I am in the minority thinking this isn’t a big deal. The rest of the industry is full of speculation about this change.
Some compliance and abuse people worry that Mailchimp has gone over to the spam side. (I doubt it.) Other people liked being able to point at Mailchimp as an example of COI being a best practice, and now they can’t. (Well, yeah, time for a better narrative.)
Marketers speculated that financial pressures and loss of customers drove this change. (I doubt it; it wasn’t that long ago that they drove customers off Mandrill.) Others are happy MC “got with the times.” (Uh, they’re actually ahead of a lot of folks in seeing patterns and innovating.)
Whatever the reason, it’s a pretty big change in policy for Mailchimp. But I don’t expect to see more spam from their networks. They’re still going to keep their customers as clean as possible.
EDIT: On Oct 30, Mailchimp announced that the default for .eu customers would continue to be double opt-in to facilitate their compliance with GDPR.
As of today, Road Runner is no longer providing an FBL. Earlier this morning a couple of ESPs reported a decrease in messages from the RR FBL. A few hours later, a senior technical account manager confirmed on mailop that the FBL was ending today.
While the announcement says that folks can expect reports to trickle in for a while, at least one ESP has reported zero reports today.
Thus ends 2 hours of rampant speculation, emails, and gossip among the deliverability community. We can all go back to work now.
The US National Cybersecurity Assessments & Technical Services Team has issued a mandate on web and email security, including TLS+HSTS for web servers, and STARTTLS+SPF+DKIM+DMARC for email.
It’s … pretty decent for a brief, public requirements doc. It’s compatible with a prudent rollout of email authentication.
- Set up a centralized reporting repository for DMARC failure and aggregate reports.
- Within 90 days, turn on opportunistic TLS, deploy SPF records, deploy DKIM and set up DMARC with p=none and an email address for reporting.
- Within 120 days, disable weak TLS ciphers.
- Within one year, migrate to p=reject.
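The DNS records behind those steps might look roughly like this (an illustrative sketch only; example.gov, the selector, and the reporting address are hypothetical placeholders, not the mandate’s actual values):

```
; SPF: authorize the domain's mail servers, hard-fail everything else
example.gov.                     TXT  "v=spf1 mx -all"

; DKIM: public key published under a selector (key material elided)
sel2017._domainkey.example.gov.  TXT  "v=DKIM1; k=rsa; p=..."

; DMARC: monitoring-only to start, aggregate reports to the central repository
_dmarc.example.gov.              TXT  "v=DMARC1; p=none; rua=mailto:dmarc-reports@example.gov"

; After the monitoring period, tighten the policy:
; _dmarc.example.gov.            TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.gov"
```

Starting at p=none while collecting rua reports is what gives the nine-month visibility window the mandate allows before the cutover to p=reject.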
The TLS requirements are sensible, and should be easy enough to roll out – and there’s likely enough time to work with vendors when it inevitably turns out that some servers can’t comply.
Best of all, it allows for up to nine months of sending email with DMARC in monitoring-only mode (p=none). That, combined with a centralized repository for DMARC reports, means they should have enough visibility into issues to resolve them before migrating to p=reject.
It all suggests a more realistic approach to DMARC timescales and issue monitoring during rollout than many organizations have shown.
They also have one of the clearer layman’s introductions to email authentication I’ve seen, at https://cyber.dhs.gov/intro/.
Much of the content is well worth borrowing if you’re planning your own authentication upgrades; it’s all released CC0 / public domain (and the markdown source is on GitHub).