Next Delivery Discussion Wednesday, May 20. We’ll be talking FBLs.
I’ve been reviewing the recording of last week’s call. A few folks have reached out and asked that their comments not be shared, so I am working out next steps. The good news is that the recording worked well and I’m learning new skills.
Please RSVP to laura-ddiscuss at the obvious domain. Invite will go out early next week with the details and the link to notes and a Google doc.
If you have any questions or topics you’d like addressed, let me know.
We had a well-attended call yesterday; almost 40 people showed up. I did get a recording but need to work out some editing before sharing it. What did you do during the pandemic? I learned lots of new things and spent way too much time relearning all the virology and immunology I forgot after leaving the lab…
I do have the google doc of resources and information I put together prior to the call, as well as some notes taken by helpful participants.
Next call is Wednesday May 20, same time. Topic to be announced.
(and, again, I wrote this and forgot to hit send)
Our next Delivery Discussion is May 6 at 5pm Irish time, noon eastern and 9am pacific. We’ll be talking about spamtraps. Drop me an email at laura-ddiscuss@ the obvious domain to get an invite.
Bring your questions, your concerns and, yes, even your gripes to talk with various folks in the industry. We’ll share what we know, what we think and what we feel about spamtraps.
I’ll be pulling together some resources and will share them here after the call.
Can’t wait to speak with you.
A few weeks ago, Kickbox asked me, and a bunch of other folks who know their stuff, what was missing in MarTech. Yesterday they published what we thought. Check out their blog post and see what folks had to say.
Email Deliverability Unfiltered: What’s missing in MarTech for deliverability?
The deliverability discussion calls are going well and I’m going to continue hosting them on a biweekly basis. Next call will be May 6th, 5pm Ireland time, noon Eastern and 9am Pacific time. Still doing invites manually, so drop me an email at laura-ddiscuss@ the obvious domain.
A few weeks ago, the discussion centered around machine learning in general. As a follow-on, I’d planned to talk about how different ML filters are trained.
Almost all filters out there are based on machine learning. The commercial B2B filters, the filters at consumer mailbox providers, all filters have some components based on ML. But there’s a big difference in how those filters are trained and the data they have access to. Gmail, VMG and Microsoft all have access to the mail client as well as the mail server. They can, and do, track user activity with mail as part of how they teach the engines.
Filtering appliances don’t have the same inputs as the mailbox providers do. They don’t have access to the mail client. That ML is not based on user interaction or engagement at all.
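To make that difference concrete, here’s a toy sketch. All the field names are hypothetical, not any real filter’s schema; the point is which columns each kind of filter can even compute before training begins:

```python
def appliance_features(msg: dict) -> dict:
    """Features a filtering appliance can compute: content and connection only."""
    return {
        "sender_domain": msg["from_domain"],
        "spf_pass": msg["spf_pass"],
        "url_count": len(msg["urls"]),
        "subject_len": len(msg["subject"]),
    }

def mailbox_provider_features(msg: dict, user_actions: dict) -> dict:
    """A mailbox provider sees the same message *plus* how users handle it."""
    features = appliance_features(msg)
    features.update({
        # Engagement signals only someone who also runs the mail client
        # can observe: opens, spam reports, rescues from the spam folder.
        "open_rate": user_actions["opens"] / max(user_actions["deliveries"], 1),
        "report_rate": user_actions["spam_reports"] / max(user_actions["deliveries"], 1),
        "rescue_rate": user_actions["not_spam"] / max(user_actions["deliveries"], 1),
    })
    return features
```

The specific fields don’t matter; what matters is that the engagement columns simply don’t exist for an appliance, so its models have to lean entirely on content and connection data.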
We did talk a little bit about that, and what folks’ experiences were, but then the conversation wandered. It was a good wander. We talked a lot about how we described filtering and filters and delays to people. I described a cake analogy a client shared with me. Another person described filters as tomato sauce.
The cake analogy: making changes at Gmail in particular is like baking a cake. You put all the ingredients together, mix them well and put the cake in the oven. Then you have to wait for it to bake. If you keep poking it, or opening the oven then you just make the cake worse. When you’re trying to fix delivery problems at Gmail, you need to make the changes and then just wait for the filters to catch up.
The tomato sauce analogy: (any errors in transcription are mine) A company wants to make tomato sauce. They want to make the best tomato sauce there is. So they make one kind of tomato sauce. But different people want different kinds of tomato sauce. Some people want chunky sauce, some want smooth sauce, some want really garlicky sauce, some want meaty sauce. A successful company makes all kinds of tomato sauce to meet the needs of different kinds of customers.
We also talked about how the size of the sender does matter. Smaller senders and larger senders are treated differently by the filters. What works when you’re small doesn’t always work when you’re big. And, what works when you’re big doesn’t always translate down to smaller senders.
It was a fun call. Afterwards I got a message from a participant saying they really enjoyed it and found it “fascinating how some scenarios can be so nuanced especially between smaller and larger senders and transactional versus promotional. There has been so much shared from everyone and the machine learning was really helpful as I definitely heard new information.”
Start those emails coming for the next call. Can’t wait to talk again.
Next deliverability discussion will be Wednesday April 22 at 5pm Ireland, Noon eastern, 9am pacific. As always, drop me a mail at laura-ddiscuss@ the obvious domain.
I’m still thinking about the final topic. One of my ideas is a continuation of the machine learning discussion from last time. I think most large scale spam filters use ML for some parts of their filtering engine these days. But how the engines are taught and the feedback for training varies. A filter aimed at the corporate market isn’t going to have the level of engagement data a Gmail or VMG has.
How filters learn drives how we can and should react to delivery problems. Many filters provide support channels, but not all of them do. Either way, a better understanding of the filters informs how we deal with delivery problems.
Look forward to seeing you there.
Thanks to everyone who joined the deliverability discussion on Friday. I realised after I scheduled that it was Good Friday and that may have limited some folks’ ability to join.
This call we talked about Machine Learning and how it is applied to spam filters. Before the call I put together some resources in a Google Doc and some participants added links and more information afterwards. I’ve shut down editing but if you have other resources to add, drop them in the comments here and I’ll add them.
A sincere thanks to everyone who joined. I’m finding these calls to be a good way to connect with folks in the industry and learn new things. I plan on continuing every other week.
I’ve been rotating days in order to give more folks a chance to join. I anticipate the next call will be Tuesday the 21st of April, morning in the US, evening here in Ireland. Time and topic will be announced later this week. If you have any requests for topics drop them in the comments.
Sometime in the last few days, folks started seeing an uptick in error messages that mention Spamhaus saying:
554 5.7.1 Service unavailable; Client host [x.x.x.x] blocked using dbl.spamhaus.org; No IP queries, see https://www.spamhaus.org/faq/section/Spamhaus%20DBL#279 (in reply to RCPT TO command)
Crowdsourcing information from the emailgeeks slack channel makes it look like sometime in the last 2 or 3 days some receiving systems started querying the list incorrectly. Spamhaus has always returned this code to senders querying the DBL with an IP address.
Basically, anyone who is getting this error message can do nothing about it. This is a configuration error on the receiver’s side. If it continues for much longer, I’d mark the domains as inactive and do-not-mail. Clearly no one at the domain is home; they haven’t noticed they’ve not received any email in days.
Many thanks to Spamhaus for contacting me and setting me straight that this was nothing new and the DBL has always returned 127.0.1.255 to an IP query directed at the DBL.
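To illustrate the misconfiguration: the DBL lists domains, never IP addresses, and 127.0.1.255 is the answer it has always returned when someone queries it with an IP. Here’s a minimal sketch (the function names are mine) of how a querying client can avoid the mistake:

```python
import ipaddress

DBL_ZONE = "dbl.spamhaus.org"

# The DBL answers in 127.0.1.x; 127.0.1.255 means the query itself was
# invalid because an IP address was sent to a domain-only list.
IP_QUERY_ERROR = "127.0.1.255"

def dbl_query_name(value: str) -> str:
    """Build the DNS name to look up, refusing IP addresses up front."""
    try:
        ipaddress.ip_address(value)
    except ValueError:
        pass  # not an IP address, safe to query the DBL
    else:
        raise ValueError(f"{value} is an IP address; the DBL only lists domains")
    return f"{value}.{DBL_ZONE}"

def interpret_dbl_answer(answer: str) -> str:
    """Translate a DBL A-record answer into a human-readable verdict."""
    if answer == IP_QUERY_ERROR:
        return "misconfigured client: IP address queried against the DBL"
    if answer.startswith("127.0.1."):
        return "domain is listed on the DBL"
    return "unrecognised answer"
```

A real lookup would then resolve the name, e.g. `socket.gethostbyname(dbl_query_name("example.com"))`; an NXDOMAIN result means the domain isn’t listed.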
A few weeks ago I was on an industry call. We were discussing some changes coming down the pike at the ISPs and filter providers. These changes are going to cause some headache at ESPs and other places that do email but don’t provide mailboxes. During the call I ended up explaining why what the ISPs were doing made sense and how it fit in with their mission and customer needs.
At one point someone asked me “whose side are you on, anyway?” That made me think pretty hard. My first reaction was “there are no sides here, we all want the same things – recipients to get the mail they want.” But that’s not what a lot of companies in the industry want. Many of them don’t really think about the email ecosystem and how individual choices affect it.
I realised, though, that I do have a side. I’m on the side of the end user who just wants the email that they want and who doesn’t want email they don’t want. But there’s a little more to it than that. It comes down to some fundamental beliefs I have about email.
Here are a few of the things I believe:
- Users expect email to be safe.
- Mail providers have a responsibility to protect users from unsafe email.
- It is better for filters to be more aggressive when it comes to safety.
- Users are possessive of their inboxes and treat them as ‘their’ space.
- Users want to receive email that they want.
- Users don’t want mail that they consider spam.
- Mailbox providers want to serve their users.
Each one of these beliefs affects how I approach deliverability and troubleshooting. They also each deserve discussion about how they affect deliverability. I’ll use this as a road map for future blog posts.
Given the success of our initial call, let’s try it again. This time Friday April 10, 5pm Ireland time, 12 noon Eastern and 9am Pacific. Same as before, send me an email to laura-ddiscuss@ the obvious domain and I’ll send you an invite. I’m trying to move the days around to catch folks who couldn’t make Wednesday. Some topics I’m considering:
- How do you learn the technical aspects of delivery?
- Ongoing changes at Microsoft and how folks are adapting.
- What tools are folks using to monitor deliverability?
I’ve been looking at various conferencing software for some projects I’m working on for later in the year. I’m going to use these calls to try out some different video conferencing software. Thank you for being my stress testers.