SNDS News

A number of people have mentioned over the last week or so that they’re seeing a lot of outages, failures and general ickiness with SNDS. I contacted Microsoft and asked about it. SNDS has been undergoing some upgrades and improvements, and the outages were not intended to be visible to end users. They’re going to keep a closer eye on things while they finish the upgrades.
The good news in all of this is that SNDS is being upgraded and maintained. SNDS is still a functioning part of the Microsoft infrastructure, and this is good news for anyone who uses it as a data source.

Related Posts

Still futile

As I mentioned last Thursday, both Yahoo and Microsoft filed replies to Holomaxx’s opposition to their motions to dismiss. Let me ‘splain… no, there is too much, let me sum up.
Holomaxx sued both Microsoft and Yahoo to force MS and Yahoo to stop blocking mail from Holomaxx.
The judge dismissed the initial complaint with leave to amend.
Holomaxx filed a first amended complaint.
Microsoft and Yahoo both argued that the first amended complaint should be dismissed because it didn’t fix the problems with the original complaint.
Holomaxx filed a motion in opposition to the motion to dismiss. Their arguments were reasonably simple.

Read More

DNS, SERVFAIL, firewalls and Microsoft

When you look up a host name, a mailserver or anything else there are three types of reply you can get. The way they’re described varies from tool to tool, but they’re most commonly referred to using the messages dig returns – NXDOMAIN, NOERROR and SERVFAIL.
NXDOMAIN is the simplest – it means that the name you asked about doesn’t exist at all, so there’s no DNS record of any type that matches your query.
NOERROR is usually what you’re hoping for – it means that there is a DNS record with the host name you asked about. There might be an exact match for your query, or there might not; you’ll need to look at the answer section of the response to see. For example, if you do “dig www.google.com MX” you’ll get a NOERROR response – because there is an A record for that hostname – but no answers, because there’s no MX record for it.
SERVFAIL is the all-purpose “something went wrong” response. By far the most common cause for it is that there’s something broken or misconfigured with the authoritative DNS for the domain you’re querying, so that your local DNS server sends out questions and never gets any answers back. After a few seconds of no responses it’ll give up and return this error.
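Those three outcomes are carried in the RCODE field of the DNS response header, which is what dig is translating into the names above. A minimal sketch in Python – the sample header bytes here are hand-built for illustration, not captured from a real query:

```python
# The RCODE sits in the low four bits of the fourth byte of the
# 12-byte DNS header (RFC 1035, section 4.1.1).
RCODES = {0: "NOERROR", 2: "SERVFAIL", 3: "NXDOMAIN"}

def rcode(packet: bytes) -> str:
    """Return the response code name for a raw DNS response packet."""
    return RCODES.get(packet[3] & 0x0F, "OTHER")

# A hand-built header whose flags bytes (0x81 0x83) carry RCODE 3,
# i.e. "name error", which dig reports as NXDOMAIN.
nxdomain_header = bytes([0x12, 0x34, 0x81, 0x83, 0, 0, 0, 0, 0, 0, 0, 0])
print(rcode(nxdomain_header))  # NXDOMAIN
```

A SERVFAIL from your local resolver often never contains an RCODE from the authoritative server at all – the resolver synthesizes it after its own queries time out, which is exactly the situation described below.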
Microsoft
Over the past few weeks we’ve heard from a few people about significant amounts of delivery failures to domains hosted by Microsoft’s live.com / outlook.com, due to SERVFAIL DNS errors. But other people saw no issues – and even the senders whose mail was bouncing could resolve the domains when they queried Microsoft’s nameservers directly rather than via their local DNS resolvers. What’s going on?
A common cause for DNS failures is inconsistent data in the DNS resolution tree for the target domain. There are tools that can mechanically check for that, though, and they showed no issues with the problematic domains. So it’s not that.
Source ports and destination ports
If you’re even slightly familiar with the Internet you’ve heard of ports – they’re the numbered slots that servers listen on to provide services. Webservers listen on port 80, mailservers on port 25, DNS servers on port 53 and so on. But those are just the destination ports – each connection comes from a source port too (it’s the combination of source port and destination port that lets two communicating computers keep track of what data should go where).
Source ports are usually assigned to each connection pretty much randomly, and you don’t need to worry about them. But DNS has a history of the source port being relevant (it used to always use source port 53, but most servers have switched to using random source ports for security reasons). And there’s been an increasing amount of publicity about using DNS servers as packet amplifiers recently, with people being encouraged to lock them down. Did somebody tweak a firewall and break something?
Both source and destination ports range between 1 and 65535. There’s no technical distinction between them, just a common understanding that certain ports are expected to be used for particular services. Historically they’ve been divided into three ranges – 1 to 1023 are the “low ports” or “well known ports”, 1024-49151 are “registered ports” and 49152 and up are “ephemeral ports”. On some operating systems normal users are prevented from using ports less than 1024, so they’re sometimes treated differently by firewall configurations.
While source ports are usually generated randomly, some tools let you assign them by hand, including dig. Adding the flag -b "0.0.0.0#1337" to dig will make it send queries from source port 1337. For ports below 1024 you need to run dig as root, but that’s easy enough to do.
A (slightly) broken firewall
“sudo dig -b "0.0.0.0#1024" live.com @ns2.msft.net” queries one of Microsoft’s nameservers for their live.com domain, and returns a good answer.
“sudo dig -b "0.0.0.0#1023" live.com @ns2.msft.net” times out. Trying other ports above and below 1024 at random gives similar results. So there’s a firewall or other packet filter somewhere that’s discarding either the queries coming from low ports or the replies going back to those low ports.
Older DNS servers always use port 53 as their source port – blocking that would have caused a lot of complaints.
But “sudo dig -b "0.0.0.0#53" live.com @ns2.msft.net” works perfectly. So the firewall, wherever it is, seems to block DNS queries from all low ports, except port 53. It’s definitely a DNS aware configuration.
DNS packets go through a lot of servers and routers and firewalls between me and Microsoft, though, so it’s possible it could be some sort of problem with my packet filters or firewall. Better to check.
“sudo dig -b "0.0.0.0#1000" google.com @ns1.google.com” works perfectly.
So does “sudo dig -b "0.0.0.0#1000" amazon.com @pdns1.ultradns.net”.
And “sudo dig -b "0.0.0.0#1000" yahoo.com @ns1.yahoo.com”.
The problem isn’t at my end of the connection, it’s near Microsoft.
Is this a firewall misconfiguration at Microsoft? Or should DNS queries not be coming from low ports (other than 53)? My take on it is that it’s the former – DNS servers are well within spec to use randomly assigned source ports, including ports below 1024, and discarding those queries is broken behaviour.
But using low source ports (other than 53) isn’t something most DNS servers will tend to do, as they’re hosted on Unix, and using those low ports on Unix requires jumping through many more programming hoops and involves more security concerns than just limiting yourself to ports above 1023. There’s no real standard for DNS source port randomization, which is something that was added to many servers in a bit of a hurry in response to a vulnerability that was heavily publicized in 2008. BIND running on Windows seems to use low ports in some configurations. And even Unix-hosted nameservers behind a NAT might have their queries rewritten to use low source ports. So discarding DNS queries from low ports is one of the more annoying sorts of network bugs – one that won’t affect most people at all, but those it does affect will see it much of the time.
If you’re seeing DNS issues resolving Microsoft hosted domains, or you’re seeing patterns of unexpected SERVFAILs from other nameservers, check to see if they’re blocking queries from low ports. If they are, take a look and see what ranges of source ports your recursive DNS resolvers are configured to use.
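If your resolver is BIND, the outgoing source port range is controlled from the options block. A sketch of the relevant settings – the specific range and excluded port here are illustrative, not a recommendation:

```
// named.conf excerpt (BIND 9): keep outgoing queries on high source ports
options {
    // pick random IPv4 UDP source ports from this range only
    use-v4-udp-ports { range 1024 65535; };
    // optionally exclude specific ports within that range
    avoid-v4-udp-ports { 8080; };  // hypothetical example
};
```

Other resolvers have equivalent knobs; the point is simply to confirm the configured range doesn’t dip below 1024.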
(There’s been some discussion of this recently on the [mailop] mailing list.)

Read More

Storms, outages and email

There’s been quite a bit of discussion over the last week about how Hurricane (Superstorm?) Sandy has affected email delivery. There are a couple of things that may affect delivery at a number of domains.
Receiving mailservers hosted in facilities that lost power or connectivity for one reason or another. Most of these issues seem to be resolved now, although a number of places are still on generator power. There are also a number of facilities where employees and customers went above and beyond the call of duty to keep those facilities running. Peer1 got a lot of press for their bucket brigade, but they’re not the only company that kept running despite power outages, flooding and horrible conditions.
Routing hardware went down in a number of places. Again, mostly because of the power outages. Router failures can mean that some mail can’t get from A to B, even if both A and B are up and functioning. As with the servers, these problems seem mostly under control.
Recipients don’t have power or internet at home. In fact, I think this is one of the bigger marketing challenges. Recipients can’t get their mail because they don’t have power or internet. This is probably going to have a bit of a longer-term effect on email. Even when folks get their email back, the latest sale email from their favorite vendor isn’t necessarily going to be what they are looking for in their inbox. Even if they are looking for that sale email, they’re going to have a mailbox with days worth of email to sort through.
None of this is a long term problem. It’s mostly temporary. But marketers can expect lower open and click rates during the storm cleanup and restoration phase.

Read More