Continuous Testing


HubSpot recently posted a blog article comparing which performs better for engagement: plain-text emails or HTML emails. In a survey they sent out in 2014, 64% of respondents said they preferred HTML and image-based emails. It seems pretty straightforward: recipients say they want HTML emails over text-based emails. Yet in HubSpot's A/B testing, the text versions had a higher open rate.
They also reported:

  • Adding GIFs decreased opens by 37%
  • An HTML template lowered opens by 25%
  • Heavy HTML with images lowered open rates by 23%

HubSpot tested the theory over 10 mailings, then looked at the click-through rates. As the number of images increased, the number of clicks decreased.
What HubSpot's results tell me is that senders may be missing out on engagement by not identifying what their recipients actually want. Testing is a critical aspect of email marketing: continuously examine what type of content your recipients respond to. Many ESPs have built-in support for automated split (A/B) testing.
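If your ESP handles split testing for you, you never see this step, but the core of any split test is assigning each recipient to a variant consistently. A minimal sketch of one common approach (hashing the address rather than picking randomly, so a recipient stays in the same bucket across sends) might look like this; the function and variant names here are illustrative, not any particular ESP's API:

```python
import hashlib

def assign_variant(email: str, variants=("html", "plain_text")) -> str:
    """Deterministically assign a recipient to a test variant.

    Hashing the normalized address means the same recipient always
    lands in the same bucket, send after send, with no state to store.
    """
    digest = hashlib.sha256(email.lower().encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Split a (hypothetical) recipient list into the two groups.
recipients = ["a@example.com", "b@example.com", "c@example.com"]
groups = {"html": [], "plain_text": []}
for address in recipients:
    groups[assign_variant(address)].append(address)
```

The deterministic hash is a design choice: random assignment works for a single send, but repeated tests read more cleanly when each recipient's experience is consistent.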
There are many ways to compare what works best for your recipients including:

  • Testing various subject lines
  • Changing preheader text
  • Relocating and adjusting the colors of your calls to action
  • Offering recipients the choice of HTML or plain-text emails
  • Adjusting the send time

There are many more options for A/B testing. Sending engaging emails is a top priority for email marketers, and senders should continuously test to discover what works best for their recipients.
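Once a split has run, deciding whether the difference in open rates is real or just noise is the part that's easy to skip. A small sketch using a standard two-proportion z-test (stdlib only; the counts below are made-up numbers, not HubSpot's data):

```python
import math

def open_rate_z(opens_a: int, sent_a: int, opens_b: int, sent_b: int) -> float:
    """Two-proportion z-score comparing open rates of variants A and B."""
    p_a = opens_a / sent_a
    p_b = opens_b / sent_b
    # Pooled open rate under the null hypothesis of no difference.
    p = (opens_a + opens_b) / (sent_a + sent_b)
    se = math.sqrt(p * (1 - p) * (1 / sent_a + 1 / sent_b))
    return (p_a - p_b) / se

# Hypothetical send: plain text got 210 opens of 1,000 sent,
# HTML got 180 opens of 1,000 sent.
z = open_rate_z(210, 1000, 180, 1000)
# |z| > 1.96 would indicate significance at roughly the 5% level.
```

The point of the check is practical: a few percentage points of difference on a small send often isn't conclusive, which is one reason HubSpot ran its test across multiple mailings rather than a single one.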

1 comment

  • A couple years ago when I was between gigs, a friend needed some help with a network move at a big ad agency downtown. One of the things that really struck me, as a transport-layer guy, was the scope and complexity of the testing they did: subjects, color schemes, layout, graphics, content, call to action – every single thing in the creatives at that ad agency was as rigorously tested as their network infrastructure was. Every send was another test that refined what followed.
    I don’t know why “quantify the effectiveness of art” was something that didn’t occur to me before then, but now I assume that it’s something that competent senders are doing.

By josh
