Ad-hoc analysis

I often pull emails into a database to analyze them, but sometimes I want something simpler. Emails are typically stored in one of two ways: mbox format, where an entire mailbox is stored in a single file, or maildir format, where a mailbox is a directory with one file for each email.
My desktop mail application is Mail.app on OS X, and it stores messages in a maildir-ish format, so I’m going to work with that here. If you’re using mbox format mailboxes it’s a little trickier (but you can use a tool called formail to split an mbox file into a maildir-style directory of one-message files and go from there).
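If you do need to split an mbox file first, something like this should do it (a sketch, assuming procmail’s formail is installed – “mailbox.mbox” and the “split” directory are names I’ve invented for the example):
mkdir split && cd split
formail -ds sh -c 'cat > msg.$FILENO' < ../mailbox.mbox
That leaves one numbered file per message in the split directory, which is close enough to a maildir for what follows.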
I want to gather some statistics on mail I’ve sent to abuse desks, so the first thing I do is open up a terminal window and change directory to where my “Sent Messages” mailbox is:
cd "Library/Mail/V2/IMAP-steve@misc.wordtothewise.com/Sent Messages.mbox"
(Tab completion is really useful for navigating through the mailbox hierarchy, and the quotes are needed because the directory name contains a space.)
Then I need to go through every email (file) in that directory and, for each file, find the “To:” header and check whether it was sent to an abuse desk. If it was, I want to extract the email address, count how many times I see each address, and find the top twenty or so abuse desks I send reports to. I can do all that with a single command line:
find . -type f -exec egrep -m1 '^To:' {} \; | egrep -o 'abuse@[a-zA-Z0-9._-]+' | sort | uniq -c | sort -nr | head -20
(Enter that all as a single line, even though it’s wrapped into two here).
That’s a bit much to understand all at once, so let’s redo it in several stages, with an intermediate file so we can see what’s going on.
find . -type f -exec egrep -m1 '^To:' {} \; >tolines.txt
The find command finds all the files in a directory and does something with them. In this case we start looking in the current directory (“.”), look just for files (“-type f”) and for each file we find we run that file through another command (“-exec egrep -m1 '^To:' {} \;”) and write the result of that command to a file (“>tolines.txt”). The backslash before the semicolon stops the shell from treating it as the end of the command, so find gets to see it. The egrep command we run for each file goes through the file and prints out the first (“-m1”) line it finds that begins with “To:” (“'^To:'”). If you run that and take a look at the file it creates you can see one line for each message, containing the “To:” header (or at least the first line of it).
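The first few lines of tolines.txt will look something like this (addresses invented for illustration):
To: abuse@example.com
To: Some Recipient <someone@example.net>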
The next thing to do is to go through that and pull out just the email addresses – and just the ones that are sent to abuse desks:
egrep -o 'abuse@[a-zA-Z0-9._-]+' tolines.txt
This uses egrep a second time, this time to look for anything that looks like an abuse desk address (“abuse@[a-zA-Z0-9._-]+”) and when it finds one print out just the part of the line that matched the pattern (“-o”).
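You can see what “-o” does with a quick test on a single invented line:
echo 'To: Abuse Desk <abuse@example.com>' | egrep -o 'abuse@[a-zA-Z0-9._-]+'
Without “-o” egrep would print the whole line; with it you get just “abuse@example.com”.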
Running that over tolines.txt gives us one line of output for each email we’re interested in, containing the address it was sent to. Next we want to count how many times we see each one. There’s a command line idiom for that:
egrep -o 'abuse@[a-zA-Z0-9._-]+' tolines.txt | sort | uniq -c
This takes all the lines and sorts (“sort”, reasonably enough) them – so that identical lines will be next to each other – then counts runs of identical lines (“uniq -c”).
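You can see the idiom at work on some toy input (three lines fed in with printf):
printf 'a\nb\na\n' | sort | uniq -c
That prints a count of 2 for “a” and 1 for “b”. We’re nearly there – the result of this stage is a count and an email address on each line. We just need to find the top 20: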
egrep -o 'abuse@[a-zA-Z0-9._-]+' tolines.txt | sort | uniq -c | sort -nr | head -20
Each line begins with the count, so we can use sort again, this time telling it to sort by number, high to low (“sort -nr”). Finally, “head -20” will print just the first 20 lines of the result.
The final result is this:

35 abuse@hostnoc.net
34 abuse@pacnet.com
32 abuse@theplanet.com
28 abuse@google.com
27 abuse@yahoo.com
21 abuse@rackspace.com
18 abuse@singlehop.com
18 abuse@rr.com
18 abuse@comcast.net
17 abuse@he.net
17 abuse@gmail.com
15 abuse@colocrossing.com
13 abuse@ntt.net
13 abuse@bit.ly
12 abuse@softlayer.com
12 abuse@aol.com
11 abuse@sendgrid.com
11 abuse@crystone.se
11 abuse@1and1.com
10 abuse@godaddy.com

It’s not just useful for mailboxes – you can use some of the same approaches to pull data and summary statistics out of raw log files too (“What different 4xx responses has this MTA seen recently?”).
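For example, this rough sketch counts the 4xx responses in a mail log – assuming the MTA logs remote server responses to /var/log/mail.log and that each 4xx reply code is followed by an x.y.z enhanced status code; adjust the path and pattern for your own MTA’s log format:
egrep -o '4[0-9][0-9] [0-9.]+' /var/log/mail.log | sort | uniq -c | sort -nr | head -20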

Related Posts

Defending against the hackers of 1995

Passwords are convenient for the end user, but it’s too easy to lose control of them. People share them with other people. People write them down, where they can be read. People send them in email, and that email is easily intercepted. People’s web browsers store the passwords, so they can log in automatically. Worst of all, perhaps, people tend to use the same username and password at many different websites. If just one of those websites is compromised (or even run as a password collecting scam) then those passwords can be used to attack accounts at all of the others.
Two factor authentication that uses an uncopyable physical device (such as a cellphone or a security token) as a second factor mitigates most of these threats very effectively. Weaker two factor authentication using digital certificates is a little easier to misuse (as the user can share the certificate with others, or have it copied without them noticing) but still a lot better than a password.
Security problems solved, then?

Read More

Weird mail problems today? Clear your DNS cache!

A number of sources are reporting this morning that there was a problem with some domains in the .com zone yesterday. These problems caused the DNS records of these domains to become corrupted. The records are now fixed. Some of the domains, however, had long TTLs. If a recursive resolver pulled the corrupted records, it could take up to two days for those corrupted records to age out of its cache.
Folks can fix this by flushing their DNS cache, thus forcing the recursive resolver to pull the uncorrupted records.
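Exactly how you flush the cache depends on the resolver you’re running; for instance, these should work for a BIND resolver and for an OS X desktop respectively (run with appropriate privileges):
rndc flush
dscacheutil -flushcache
Other resolvers have their own equivalents – check your resolver’s documentation.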
EDIT: Cisco has published some more information about the problem. ‘Hijacking’ of DNS Records from Network Solutions

Read More

You can't technical yourself out of delivery problems

In many cases these days, many more cases than a lot of senders want to admit, delivery problems at the big ISPs are a result of sending mail that recipients just don’t care about. The reason your mail is going to bulk? It’s not because you have minor problems in your headers. It’s not because you have some formatting issues. It’s because your recipients just don’t care if the ISP delivers your mail or not.
A few years ago the bulk of my clients hired me to do technical audits for their mail. I fixed a lot of delivery problems that way. They’d send me their email and I’d run it through tools here and identify things they were doing that were likely to be causing problems. I’d give them some suggestions of things to change. Believe it or not, minor tweaks to headers and configuration actually did make a lot of difference in delivery.
Over time, though, those tweaks became less effective at fixing delivery problems. Some of that is due to the MTA vendors: they’re a lot better at sending technically correct mail than they used to be. There are also a lot more people giving good advice on the underlying structure and format of emails, so senders can send technically clean email. I started seeing technically perfect emails from clients who were seeing major delivery problems.
There are a number of reasons that technical fixes don’t work like they used to. The short version, though, is that ISPs have dealt with much of the really blatant spam and they can focus more time and energy on the “grey mail”.
This makes my job a little harder. I can no longer just look at an email, maybe run it through some of our tools, and provide a few suggestions that fix delivery problems. Delivery isn’t that simple any longer. Filters are now much more focused on how recipients react to mail. That means I need to know a lot more about a client’s email program before I can even start to identify what might be causing the delivery issues.
I wish it were still simple enough that a few minor technical tweaks could appear to magically improve a client’s delivery. It was a much simpler process then. But filters have evolved, and senders must evolve, too.

Read More