Testing and data-driven decisions

A lot of my education in the sciences focused on how to get a statistically accurate sample. There’s a lot of math involved in picking the right sample size, and an equal amount of math in figuring out the right statistical tests to analyse the data. One of the lessons of grad school was: the university has statistics experts, use them when designing studies.
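
To make that concrete, here is the kind of back-of-the-envelope sample-size arithmetic involved, sketched in Python. It assumes a two-sided two-proportion z-test at 95% confidence and 80% power; the click rates in the example are invented, not anything from a real test.

```python
# Rough sample-size estimate for an A/B email test: how many recipients
# per variant are needed to detect a given lift in click rate?
# Assumes a two-sided two-proportion z-test; all rates below are invented.
from statistics import NormalDist

def sample_size_per_group(p1: float, p2: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate recipients needed per variant to tell p1 from p2."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile matching the desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = ((z_alpha + z_beta) ** 2 * variance) / (p1 - p2) ** 2
    return int(n) + 1                   # round up

# Example: baseline 2% click rate, hoping to detect a lift to 2.5%.
print(sample_size_per_group(0.020, 0.025))  # ~13,800 recipients per variant
```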

Even in science, not everything we test has to be statistically accurate. Sometimes we just want to get an idea of whether there’s something there. Is there a difference between doing X and doing Y? Let’s run a couple of pilot tests and see what happens. Is this a line of inquiry worth pursuing?
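
As an illustration of that kind of quick look, a pilot comparison might be sanity-checked with something like the sketch below. The counts are invented and the choice of Fisher’s exact test (via SciPy) is my assumption; a small p-value only suggests the question is worth a properly designed study.

```python
# Quick look at pilot results: did variant X and variant Y click through at
# noticeably different rates? All counts here are invented for illustration.
# Fisher's exact test is a reasonable choice for small pilot samples.
from scipy.stats import fisher_exact

clicked_x, sent_x = 18, 500   # hypothetical pilot send for variant X
clicked_y, sent_y = 31, 500   # hypothetical pilot send for variant Y

table = [
    [clicked_x, sent_x - clicked_x],
    [clicked_y, sent_y - clicked_y],
]
odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.3f}")   # a small p hints the difference deserves a real study
```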

Much of my statistical knowledge comes from practice, not theory. Most of my advanced classes did have some stats, but I never actually took a statistics class. That leaves me in a strange position when listening to people talking about the testing they do. I know enough statistics to question whether their results are valid and meaningful. But I don’t know enough theory to actually dig down into the numbers and explain why.

In marketing, we do a lot of testing. We use the results of this testing to drive decisions. We call this data-driven marketing. I know a lot of marketing departments and agencies do have statisticians and data scientists on hand.

I am sure, though, that some tests are poorly designed and incorrectly analysed. Bad data leads to poor decisions, which in turn lead to inconsistent or unexpected results. The biggest problem is people who fail to go back and question whether the data used to make the decision means what they think it does.

Email, and particularly filtering, has a lot of non-repeatable elements. Gmail’s filters, for instance, adapt constantly. Without carefully constructed, controlled and repeated tests we’re never going to be able to tease out the specifics. The even bigger challenge is that the process of testing will, in and of itself, change the results. Run the same series of tests over and over again and the filters may adapt and act differently for test 11 than they did for test 2.
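
A deliberately artificial sketch of that feedback effect: a toy “filter” that gets stricter each time it sees the same campaign. None of the numbers reflect any real filter; the point is only that repeating an identical test against an adapting system gives drifting results.

```python
# Toy illustration only: a made-up "filter" that gets stricter every time it
# sees the same campaign again. Real filters are vastly more complex; the
# point is just that repeating an identical test produces drifting results.
import random

random.seed(1)

class AdaptiveFilter:
    def __init__(self):
        self.runs_seen = 0

    def run_campaign(self, n_messages: int = 100) -> int:
        """Return how many of n_messages the filter lets through this run."""
        # Acceptance rate drops a little with every repeat of the campaign.
        p_accept = max(0.50, 0.95 - 0.04 * self.runs_seen)
        self.runs_seen += 1
        return sum(random.random() < p_accept for _ in range(n_messages))

filt = AdaptiveFilter()
for test_number in range(1, 12):
    delivered = filt.run_campaign()
    print(f"test {test_number:2d}: {delivered}/100 delivered")
```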

Another piece that leads to poor decision making is thinking our preferences are representative of our audience. Even unconsciously, many of us design marketing programs that fit the way we like to be marketed to. In order to make good decisions, we need to question our own biases and think about what our audience wants.

Finally, there is a lot of value in looking at how people behave. One thing I’ve heard a lot from marketers over the years is that what people say they want is different from how they actually act.

Overall, to make good marketing decisions we can’t just collect random bits of data and use it to justify what we wanted to do anyway. The data always reflects the question we asked, but not always the question we wanted the answer to. Blindly using data, without thinking about our own biases, leads to poor outcomes.

2 comments

  • I think you have come across what I have thought was one of the hardest things to deal with in deliverability: data-driven decision making based on reliable, repeatable, statistically sound data. I do have the theory and statistics knowledge to confirm your thoughts, in that much of what I have seen from even “leaders” in marketing is based on less than solid statistical grounds. There are too many factors, and inferences are drawn from small and inconclusive data sets, or garnered from large, overarching data sets that don’t lend themselves to small bits of knowledge that can be shown to be true.
    What I have learned through my trek in deliverability, though, is that those who keep close to trends and try to base things on data where they can generally have good intuition about what is affecting a specific client or email sender.
