3 Traps of Data-Driven Marketing

Part of me can’t believe I’m writing this article. I’m a staunch supporter of the “automate everything” and “objective thinking at all costs” models of life and work. After years in physics, I hold a firm reductionist point of view on nearly everything, always asserting that “to know the fundamental principles of a system is to know the entire system.” The problem is, the systems I worked on in the past did not involve a human element. Hard to predict and constantly generating noise, pesky humans always muddy up the process. Enter marketing.

“Data-driven marketing” has become a redundant mouthful. Data prevails as the main driving force behind nearly every marketing campaign, so it’s hard to imagine a scenario where decisions are made on anything else. Sure, marketing has always been driven by “data”, but the variety and volume now available open up new solutions. More data leads to higher certainty, so we can reveal hidden patterns, right? This is all true, but let’s avoid falling into the traps of blind and careless data-driven thinking.

No matter where it comes from, a common feature of all data is noise. No data set is 100% accurate and relevant. This is not a hot take. However, it’s easy to forget, and to assume that all the bells and whistles developed by data scientists take care of everything. It’s also easy to conclude we have enough data to bury the noise, thanks to the law of large numbers. Sure, statistics tells us that as long as most of the data is credible and there is enough of it, the noise averages out and we won’t get smacked by our assumptions later on.
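To make this concrete, here is a quick simulation of the law of large numbers at work. The 4% click-through rate, the sample sizes, and the whole setup are assumptions I invented for illustration, not data from any real campaign.

```python
import random

# Estimate a (hypothetical) true click-through rate of 4% from noisy,
# per-email observations of increasing sample size.
TRUE_CTR = 0.04

def observed_ctr(n_emails: int) -> float:
    """Simulate n_emails sends and return the measured click-through rate."""
    clicks = sum(1 for _ in range(n_emails) if random.random() < TRUE_CTR)
    return clicks / n_emails

random.seed(42)
for n in (100, 1_000, 10_000, 100_000):
    print(f"n = {n:>7,}: measured CTR = {observed_ctr(n):.4f} (true = {TRUE_CTR})")

# The estimate wanders at small n and settles near 0.04 as n grows, but only
# because each observation here is noisy in an unbiased way. Biased data does
# not average out, no matter how much of it we collect.
```

The catch, and the source of the traps below, is that real marketing data rarely satisfies those tidy conditions.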

Here are three sticking points to watch out for when you mobilize a new marketing campaign that will surely glisten with the latest data-driven ingredients.

Trap #1. My margin of error is different from yours

Every campaign is different. Some can tolerate a generous amount of “that’s close enough.” Others demand tight precision, even at the expense of volume. The beauty of optimization algorithms is that they steer us toward solutions we may not find without them. They hammer out patterns by inspecting sets of inputs and outputs to derive the optimal input. The key considerations are how precisely we define the optimal output and what bounds we place on the allowed inputs.

  • Scenario: We want to increase engagement with our email marketing campaigns. We hypothesize that optimizing the image content to appeal to IT executives might do the trick.
  • Data science in action: We generate a range of images with labeled attributes (funny, serious, rambunctious, revolutionary, etc.). We start sending emails while slightly changing the character of the images each time. Send an email, measure the result. Send an email, measure the result. So on and so forth.
    Over time, our algorithm tells us to send content that resembles a series of Dilbert cartoons. What? It turns out IT executives think Dilbert cartoons are hilarious, and they spend time on our emails with funny images. Maybe we can make this work. However, our brand is hardly well represented if it’s viewed as a cartoon publication rather than a cyber security service provider. That’s my two cents; others may have a different opinion about this specific example.
  • The trap: The algorithm had no bounds on its input. It found the correct solution and delivered the desired result as we defined it: more engagement for our email campaign, at the expense of a weaker message. The human element of IT executives giggling at Dilbert overshadowed our business objective. A sketch of the fix follows this list.
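To make the fix concrete, here is a minimal sketch of the same send-and-measure loop with an explicit bound on the input. The style labels, engagement rates, and the epsilon-greedy strategy are all assumptions of mine for illustration, not the actual algorithm behind the campaign above.

```python
import random

# All styles the generator can produce, and the subset we allow it to use.
ALL_STYLES = ["funny", "serious", "rambunctious", "revolutionary"]
ON_BRAND = {"serious", "revolutionary"}              # the bound on the input
CANDIDATES = [s for s in ALL_STYLES if s in ON_BRAND]

# Hypothetical true engagement rates: "funny" wins, but it is out of bounds.
TRUE_RATE = {"funny": 0.12, "serious": 0.07,
             "rambunctious": 0.03, "revolutionary": 0.06}

def send_email(style: str) -> int:
    """Send one email with the given image style; return 1 on engagement."""
    return int(random.random() < TRUE_RATE[style])

random.seed(0)
sends = {s: 0 for s in CANDIDATES}
wins = {s: 0 for s in CANDIDATES}
for _ in range(5_000):
    if random.random() < 0.1:                        # explore 10% of the time
        style = random.choice(CANDIDATES)
    else:                                            # otherwise exploit the leader
        style = max(CANDIDATES,
                    key=lambda s: wins[s] / sends[s] if sends[s] else 0.0)
    wins[style] += send_email(style)
    sends[style] += 1

for s in CANDIDATES:
    print(f"{s}: {wins[s] / sends[s]:.3f} engagement over {sends[s]} sends")
# The loop settles on the best *on-brand* style instead of drifting toward
# Dilbert cartoons, because "funny" was never a legal input.
```

The interesting design choice is that the bound lives in the candidate set, not in the objective: we never had to quantify “on-brand” as a penalty term, we simply removed off-brand inputs from the search space.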

Trap #2. Sometimes you can’t learn from the past

“Study the past, if you would divine the future” — Confucius

That’s good advice for life in general. However, humans can intuit when two experiences are comparable and when they are not. If we assume two marketing campaigns are comparable, the machine making the predictions will inherit that assumption. We can learn a lot from work we’ve completed, but we must carefully consider the prior assumptions in every case, and how the business objectives have evolved.

  • Scenario: We want to determine the optimal time (between 8am and 10pm) to Tweet company news to maximize retweets among our target audience of middle-aged professionals.
  • Data science in action: We establish clear bounds on the input and a precise output objective, so Trap #1 is successfully sidestepped (as well as we can). Wait! Instead of “Tweet, measure the result. Tweet, measure the result. So on and so forth,” let’s use the results from our experiment last week, when we identified the optimal time to engage with middle-aged Marketing professionals!
  • The trap: This certainly may work. However, the objectives of the two experiments are slightly different. The new objective targets all middle-aged professionals; the result from last week covers only Marketing professionals. It may be true that Marketing professionals fairly represent the overall population, but they may also deviate from the mean. We are rolling the dice unless we test the exact scenario that matches our objective! Learn from past results, but consider the parameters carefully. The best approach is to use past results as the starting point for a new exploration, as in the sketch below.
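Here is one hedged way to do that, sketched in Python: seed each hour’s retweet-rate estimate with a weak prior built from last week’s Marketing-professionals result, then let fresh data from the broader audience take over. The hours, rates, prior weight, and the Thompson-sampling strategy are all illustrative assumptions, not a record of either experiment.

```python
import random

HOURS = list(range(8, 23))                  # 8am through 10pm, one arm per hour

# Last week's per-hour rates (Marketing professionals only), down-weighted to
# a prior worth about 20 observations so it guides, rather than dictates.
PRIOR_RATE = {h: (0.06 if h == 12 else 0.05) for h in HOURS}   # peaked at noon
PRIOR_WEIGHT = 20
alpha = {h: 1 + PRIOR_RATE[h] * PRIOR_WEIGHT for h in HOURS}
beta = {h: 1 + (1 - PRIOR_RATE[h]) * PRIOR_WEIGHT for h in HOURS}

# Hypothetical truth for the *new*, broader audience: it actually peaks at 6pm.
TRUE_RATE = {h: (0.09 if h == 18 else 0.04) for h in HOURS}

random.seed(1)
for _ in range(10_000):                     # one simulated tweet per step
    # Thompson sampling: draw a plausible rate for each hour, tweet at the best.
    hour = max(HOURS, key=lambda h: random.betavariate(alpha[h], beta[h]))
    if random.random() < TRUE_RATE[hour]:
        alpha[hour] += 1                    # retweeted
    else:
        beta[hour] += 1                     # ignored

best = max(HOURS, key=lambda h: alpha[h] / (alpha[h] + beta[h]))
print(f"estimated best hour to tweet: {best}:00")
# With a weak prior, the estimate typically migrates from the old answer
# (noon) to the new audience's true peak (6pm) as evidence accumulates.
```

Had we set PRIOR_WEIGHT to thousands of observations, last week’s result would have dictated the answer and the new audience’s 6pm peak could go undiscovered, which is exactly Trap #2.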

Trap #3. All data are not created equal

Most data generated by humans is a mess. Psychology, economics, political science (ugh, polling) all have something in common: experts try to make predictions, and they are often wrong. Well, they are usually “not wrong” within the very narrow bounds that they define. Consensus is hard to come by in the social sciences. Data collection is difficult, a complete representation of a population is unrealistic, and human emotions and subjective inclinations are often masked.

Dealing with noisy and incompatible data is a fact of life in marketing as well. Managing innumerable marketing channels inevitably produces just as many data streams, which in turn produce incompatible data sets with different assumptions, different sources of noise, and different fundamental attributes. Consider the variation between a LinkedIn audience and a Twitter audience. We would love to distill overarching insights from the largest possible audience, but controlling for the disparate sources of data is a must: each marketing channel needs its own assumptions before its data joins the pool.
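For instance, here is a minimal sketch of one such control: standardize engagement within each channel before pooling, so one channel’s larger baseline doesn’t drown out the other. The channel names, click counts, and the z-score approach are illustrative assumptions on my part; real channels deserve their own noise and bias models on top of this.

```python
from statistics import mean, stdev

# Hypothetical clicks per post, pulled from two channels with very
# different baselines and spreads.
raw = {
    "linkedin": [120, 95, 140, 110, 130],
    "twitter": [14, 9, 22, 11, 17],
}

def zscores(values: list[float]) -> list[float]:
    """Standardize values to zero mean and unit spread within one channel."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

# Pool posts on a common scale: each z-score says how a post performed
# relative to its own channel's norm, not in raw clicks.
pooled = [
    (channel, idx, z)
    for channel, values in raw.items()
    for idx, z in enumerate(zscores(values))
]

for channel, idx, z in sorted(pooled, key=lambda t: -t[2])[:3]:
    print(f"top performer: {channel} post {idx}, z = {z:+.2f}")
```

A z-score is a crude instrument; the point is only that raw metrics from disparate channels should never land in the same column without some channel-specific transformation in between.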

In Summary

Data-driven marketing techniques are invaluable. I engage in marketing automation and data analysis every single day; there really is no other way to do it. However, avoiding these three traps requires deviating from the “set it and forget it” ideology promised by the wizards of automation. Great care should be taken when defining our problems. The quality we get out of an automated system is defined by the quality of what we put in.

To learn more about how Automata’s “agency” approach can help you build a custom solution for your marketing, sales, and intelligence goals, contact info@automata-solutions.com or fill out our Contact Form and let us know exactly what you need.


About
Andrew Fraine, PhD is a Co-Founder and Director of Data Science at Automata. Get in touch at andrew.fraine@automata-solutions.com or on LinkedIn: https://www.linkedin.com/in/andrew-fraine-45853517