Ad-hoc analysis

I often pull emails into a database to analyze them, but sometimes I want something simpler. Emails are typically stored in one of two ways: mbox format, where an entire mailbox is stored in a single file, or maildir format, where a mailbox is a directory with one file in it for each email.
My desktop mail application is Mail.app on OS X, and it stores messages in a maildir-ish format, so I’m going to work with that here. If you’re using mbox format mailboxes it’s a little trickier (but you can use a tool called formail, shipped with procmail, to split an mbox file into a maildir-style directory of individual messages and go from there).
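For instance, something like this should split an mbox file into one file per message (the mailbox name inbox.mbox and the msgs directory are placeholders here; formail sets $FILENO to the number of the message it’s currently writing):
mkdir msgs && cd msgs
formail -s sh -c 'cat > msg.$FILENO' < ../inbox.mbox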
I want to gather some statistics on mail I’ve sent to abuse desks, so the first thing I do is open up a terminal window and change directory to where my “Sent Messages” mailbox is:
cd Library/Mail/V2/IMAP-steve@misc.wordtothewise.com/Sent\ Messages.mbox
(Tab completion is really useful for navigating through the mailbox hierarchy.)
Then I need to go through every email (file) in that directory, find the “To:” header in each one, and check whether it was sent to an abuse desk. If it was, I want to extract the email address, count how many times I see each address, and list the top twenty or so abuse desks I send reports to. I can do all that with a single command line:
find . -type f -exec egrep -m1 '^To:' {} \; | egrep -o 'abuse@[a-zA-Z0-9._-]+' | sort | uniq -c | sort -nr | head -20
(Enter that all as a single line, even though it’s wrapped into two here).
That’s a bit much to understand all at once, so let’s redo it in several stages, with an intermediate file so we can see what’s going on.
find . -type f -exec egrep -m1 '^To:' {} \; >tolines.txt
The find command finds all the files in a directory and does something with them. In this case we start looking in the current directory (“.”), look just for files (“-type f”), and for each file we find we run that file through another command (“-exec egrep -m1 '^To:' {} \;”) and write the result of that command to a file (“>tolines.txt”). The egrep command we run for each file goes through the file and prints out the first (“-m1”) line it finds that begins with “To:” (“'^To:'”). If you run that and take a look at the file it creates you can see one line for each message, containing the “To:” header (or at least the first line of it).
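Each line of tolines.txt will look something like these (made-up examples – some recipients are abuse desks, some aren’t, and some have a display name in front of the address):
To: abuse@example.net
To: Abuse Desk <abuse@example.com>
To: steve@example.org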
The next thing to do is to go through that and pull out just the email addresses – and just the ones that are sent to abuse desks:
egrep -o 'abuse@[a-zA-Z0-9._-]+' tolines.txt
This uses egrep a second time, this time to look for anything that looks like an abuse desk email address (“'abuse@[a-zA-Z0-9._-]+'”) and, when it finds one, to print out just the part of the line that matched the pattern (“-o”).
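You can see what “-o” does by feeding egrep a single made-up header:
echo 'To: Abuse Desk <abuse@example.com>' | egrep -o 'abuse@[a-zA-Z0-9._-]+'
That prints just “abuse@example.com”, discarding the display name and everything else on the line.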
Running that gives us one line of output for each email we’re interested in, containing the address it was sent to. Next we want to count how many times we see each one. There’s a command line idiom for that:
egrep -o 'abuse@[a-zA-Z0-9._-]+' tolines.txt | sort | uniq -c
This takes all the lines and sorts (“sort”, reasonably enough) them – so that identical lines will be next to each other – then counts runs of identical lines (“uniq -c”). We’re nearly there – the result of this is a count and an email address on each line. We just need to find the top 20:
egrep -o 'abuse@[a-zA-Z0-9._-]+' tolines.txt | sort | uniq -c | sort -nr | head -20
Each line begins with the count, so we can use sort again, this time telling it to sort by number, high to low (“sort -nr”). Finally, “head -20” will print just the first 20 lines of the result.
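If the sort/uniq idiom is unfamiliar, you can watch it work on any small input. This toy example counts two distinct lines:
printf 'b\na\nb\nb\na\n' | sort | uniq -c | sort -nr
That prints “3 b” followed by “2 a” – each distinct line, prefixed by how often it appeared, most frequent first.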
The final result is this:

35 abuse@hostnoc.net
34 abuse@pacnet.com
32 abuse@theplanet.com
28 abuse@google.com
27 abuse@yahoo.com
21 abuse@rackspace.com
18 abuse@singlehop.com
18 abuse@rr.com
18 abuse@comcast.net
17 abuse@he.net
17 abuse@gmail.com
15 abuse@colocrossing.com
13 abuse@ntt.net
13 abuse@bit.ly
12 abuse@softlayer.com
12 abuse@aol.com
11 abuse@sendgrid.com
11 abuse@crystone.se
11 abuse@1and1.com
10 abuse@godaddy.com

It’s not just useful for mailboxes – you can use some of the same approaches to pull data and summary statistics out of raw log files too (“What different 4xx responses has this MTA seen recently?”).
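As a sketch of that, assuming a Postfix-style log where remote server responses appear after “said:” (the log path and exact format vary by MTA and OS):
egrep -o 'said: 4[0-9][0-9]' /var/log/mail.log | sort | uniq -c | sort -nr
That counts how often the MTA has seen each 4xx response code – the same grep/sort/uniq toolbox, pointed at a different file.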
