How data visualization enhances business decision making

Data visualization is the art of our age.

Just as Michelangelo approached that giant block of Carrara marble and said, “I saw the angel in the marble and carved it until I set him free,” analysts are approaching data with the same visionary and inquisitive mind.

In today’s age, where big data reigns, the art of data analysis is making sense of our world.

Analysts are chiseling the bulk of raw data to create meaning—patterns and associations, maps and models— to help us draw insights, understand trends, and even make decisions from the stories the data tell.

Data visualization is the graphical display of abstract information for two purposes: sense-making (also called data analysis) and communication. Important stories live in our data and data visualization is a powerful means to discover and understand these stories, and then to present them to others.

But such complex information is not easily processed by the human brain until it is presented as a data visualization.

Tables, charts, and graphs provide powerful representations of numerous data points so that the insights and trends are easily understood by the human brain.

That’s why data visualization is one of the most persuasive techniques to evangelize experimentation today—particularly in an era of ever-decreasing attention spans.

On a slide. On a dashboard in Google Data Studio. Or simply something you plan to sketch on a whiteboard. This presentation of the data will decide whether your trends and insights are understood and accepted, and whether the right inferences are drawn about what action should be taken.

A thoughtfully crafted visualization conveys an abundance of complex information using relatively little space and by leveraging our visual system—whether that’s the optimal number of lead generation form fields or the potential ROI of your program throughout the quarter.

In this post, we dig into the practice of designing data visualizations for your audience. You will learn:

  • How your data visualizations can enhance the Executive decision-making process, using the guidelines of the Cynefin Framework
  • Why data visualizations are the most powerful way for the human brain to process complex information, explained through dual processing theory
  • What makes data visualizations effective using the five qualities defined by expert Alberto Cairo
  • And a real-world example of how you can work through a problem to arrive at the most effective data visualization for your audience.

The Brain (Or, why we need data visualization)

You may be familiar with System 1 and System 2 thinking, known as dual processing theory. System 1 (or Type 1) is the predominant fast, instinctual decision-making and System 2 (Type 2) is the slow, rational decision-making.

Dual Process Theory

Dual Process Theory categorizes human thinking into two types or systems.

We often associate System 1 thinking with your audience’s emotions. (We talked about it in “Evangelizing experimentation: A strategy for scaling your test and learn culture” or in “I feel, therefore I buy: How your users make buying decisions.”)

But that immediate grasp over complex information in a data visualization is also related to System 1 thinking.

A large part of our brain is dedicated to visual processing. It’s instinctual. It’s immediate.

If you have a strong data visualization, every sighted person can understand the information at hand. A seemingly simple 5×5 chart can provide a snapshot of thousands of data points.

In other words, visualizing data with preattentive features in mind is akin to designing ergonomic objects: You know that a sofa is made for sitting. You know that a handle on a coffee mug is designed for your hand. (This is called preattentive processing.)

Preattentive processing occurs before conscious attention. Preattentive features are processed very quickly…within around 10 milliseconds.

When creating data visualizations, you are designing for human physiology. Any other method of translating that information is a disservice to your message and your audience.

When we consider the speed at which people understand the multiple data points in a problem through dual processing theory and preattentive processing, it’s almost foolish not to take advantage of data visualization.

When you design data visualizations, you are demonstrating that you understand your audience.

Understanding how Executives make decisions

A data visualization is a display of data designed to enable analysis, exploration, and discovery. Data visualizations aren’t intended mainly to convey messages that are predefined by their designers. Instead they are often conceived as tools that let people extract their own conclusions from the data.

Data analysis allows Executives to weigh the alternatives of different outcomes of their decisions.

And data visualizations can be the most powerful tool in your arsenal, because your audience can see thousands of data points on a simple chart.

Your data visualization allows your audience to gauge (in seconds!) a more complete picture so they can make sense of the story the data tell.

In Jeanne Moore’s article “Data Visualization in Support of Executive Decision Making,” the author explored the nature of strategic decision making through the Cynefin framework.

The Cynefin Framework

The Cynefin Framework aids Executives in determining how to best respond to situations by categorizing them in five domains: Simple, Complicated, Complex, Chaotic and Disorder. Source: HBR’s A Leader’s Framework for Decision Making

The Cynefin Framework

The Cynefin Framework (pronounced ku-nev-in) allows business leaders to categorize issues into five domains, based on the ability to predetermine the cause and effect of their decisions.

Created by David Snowden in 1999 when he worked for IBM Global Services, the Cynefin framework has since informed leadership decision making at countless organizations.

The five domains of the Cynefin Framework are:

  • In the Simple Domain, there is a clear cause and effect. The results of the decision are easy to predict and can be based on processes, best practices, or historical knowledge. Leaders must sense, categorize and respond to issues.
  • In the Complicated Domain, multiple answers exist. Though there is a relationship between cause and effect, it may not be clear at first (think known unknowns). Experts sense the situation, analyze it and respond to the situation.
  • In the Complex Domain, decisions can be clarified by emerging patterns. That’s because issues in this domain are susceptible to the unknown unknowns of the business landscape. Leaders must probe, sense and respond.
  • In the Chaotic Domain, leaders must act to establish order in a chaotic situation (an organizational crisis!), and then gauge where stability exists and where it doesn’t, to get a handle on the situation and move it into the Complex or Complicated domain.
  • And in the Disorder Domain, the situation cannot be categorized into any of the other four domains. It is utterly unknown territory. Leaders can analyze the situation and categorize different parts of the problem into the other four domains.

In organizations, decision making is often related to the Complex Domain because business leaders are challenged to act in situations that are seemingly unclear or even unpredictable.

Leaders who try to impose order in a complex context will fail, but those who set the stage, step back a bit, allow patterns to emerge, and determine which ones are desirable will succeed. They will discern opportunities for innovation, creativity, and new business models.

David J. Snowden and Mary E. Boone

Poor quarterly results, management shifts, and even a merger—these Complex Domain scenarios are unpredictable, with several methods of responding, according to David J. Snowden and Mary E. Boone.

In other words, Executives need to test and learn to gather data on how to best proceed.

“Leaders who don’t recognize that a complex domain requires a more experimental mode of management may become impatient when they don’t seem to be achieving the results they were aiming for. They may also find it difficult to tolerate failure, which is an essential aspect of experimental understanding,” explain David J. Snowden and Mary E. Boone.

Probing and sensing the scenario to determine a course of action can be assisted by a data analyst, who helps leaders collaboratively understand the historical and current information at hand in order to guide the next course of action.

An organization should take little interest in evaluating — and even less in justifying — past decisions. The totality of its interest should rest with how its data can inform its understanding of what is likely to happen in the future.

Of course, there is always the threat of oversimplifying issues, treating scenarios like they have easy answers.

But even with situations in the other domains of the Cynefin Framework, data visualization can provide insight into next steps—if they meet certain criteria.

What makes an effective data visualization

The presenter of the visualization must also provide a guiding force to assist the executive in reaching a final decision, but not actually formulate the decision for the executive.

With data visualization, there will always be insightful examples and examples that clearly missed the mark.

Avinash Kaushik, in his Occam’s Razor article “Closing Data’s Last-Mile Gap: Visualizing For Impact!”, called the ability of data visualizations to influence the Executive’s decision-making process closing the “last-mile” gap.

It can take an incredible effort to gather, sort, analyze and glean insights and trends from your data. If your analysis is solid, if your insights and trends are enlightening, you don’t want to muddle your audience with a confusing data visualization.

Remember: a data visualization is only as impactful as the impression its design makes on your audience.

In terms of the value in data visualization, it must provide simplicity, clarity, intuitiveness, insightfulness, gap, pattern and trending capability in a collaboration enabling manner, supporting the requirements and decision objectives of the executive.

Alberto Cairo’s Five Qualities of Great Data Visualizations

Alberto Cairo, author of “The Truthful Art: Data, Charts, and Maps for Communication,” outlines five qualities of great data visualizations. Your data visualization should be:

  1. Truthful: It should be based on thorough and objective research—just as a journalist is expected to represent the truth to the best of their abilities, so too is the data analyst.
  2. Functional: It should be accurate and allow your audience to act upon your information. For instance, they can perceive the incremental gains of your experimentation program over time in a sloping trendline.
  3. Beautiful: It needs to be well-designed. It needs to draw in your audience’s attention through an aesthetically pleasing display of information.
  4. Insightful: It needs to provide evidence that would be difficult to see otherwise. Trends, insights, and inferences must be drawn by the audience, in collaboration with the data analyst.
  5. Enlightening: It needs to illuminate your evidence. It needs to enlighten your audience with your information in a way that is easy to understand.

When you nail down all five of these criteria, your data visualization can shift your audience’s ways of thinking.

It can lead to those moments of clarity on what action to take next.

So, how are these design decisions made in data visualization?

Here’s an example.

How we make decisions about data visualization: An example in process

A note on framing: While the chart and data discussed below are real, the framing is artificial to protect confidentiality. The premise of this analysis is that we can generate better experiment ideas and prioritize future experiments by effectively communicating the insights available in the data.

Lead generation forms.

You probably come across these all the time in your web searches. Some forms have multiple fields and others have few—maybe enough for your name and email.

Suppose you manage thousands of sites, each with their own lead generation form—some long and some short. And you want to determine how many fields you should require from your prospects.

If you require too many form fields, you’ll lose conversions; too few, and you’ll lose information to qualify those prospects.

It’s a tricky situation to balance.

Like all fun data challenges, it’s best to pare the problem down into smaller, manageable questions. In this case, the first question you should explore is the relationship between the number of required fields and the conversion rate. The question is:

How do conversion rates change when we vary the number of required fields?

Unlike lead quality—which can be harder to measure and is appraised much further down the funnel—analyzing the relationship between the number of required fields and the number of submissions is relatively straightforward with the right data in hand. (Cajoling the analytics suite to provide that data can be an interesting exercise in itself—some will not do so willingly.)

So, you query your analytics suite, and (assuming all goes well), you get back this summary table:

[Table: average conversion rate and percent of sites by number of required fields]
On this table, how immediately do you register the differences between the average conversion rates? Note how you process the information—it’s System 2 thinking.
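If you want to reproduce that kind of summary yourself, here is a minimal sketch in Python, assuming a pandas DataFrame called `forms` with one row per site and hypothetical column names `required_fields` and `conversion_rate` (your analytics export will almost certainly name things differently):

```python
# A minimal sketch, not the original query: build the summary table from a
# hypothetical per-site DataFrame `forms` with columns `required_fields`
# and `conversion_rate`.
import pandas as pd

def summarize(forms: pd.DataFrame) -> pd.DataFrame:
    summary = (
        forms.groupby("required_fields")
        .agg(avg_conversion_rate=("conversion_rate", "mean"),
             site_count=("conversion_rate", "size"))
        .reset_index()
    )
    summary["percent_of_sites"] = 100 * summary["site_count"] / summary["site_count"].sum()
    return summary

# summary = summarize(forms)  # `forms` is the hypothetical analytics export
```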

What’s the most effective way to convey the message in this data?

Most of you probably glossed over the table, and truth be told, I don’t blame you—it’s borderline rude to expect anyone to try to make sense of this many variables and numbers.

However, if you spend half a minute or so analyzing the table, you will make sense of what’s going on.

In this table format, you are processing the information using System 2 thinking—the cognitive way of understanding the data at hand.

On the other hand, note how immediate your understanding is with a simple data visualization…

The bar graph

[Figure: bar graph of average conversion rate by number of required fields]
Compared to the table above, the decrease in conversion rate between one and four required fields is immediately obvious, as is the upward trend after four. Your quick processing of these differences is System 1 thinking.
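As a rough illustration, a bar chart like this can be produced with a few lines of matplotlib, continuing from the hypothetical `summary` table built in the earlier sketch:

```python
# Rough-and-ready bar chart, continuing from the `summary` table in the
# sketch above (column names are the hypothetical ones used there).
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 4))
ax.bar(summary["required_fields"], summary["avg_conversion_rate"])
ax.set_xlabel("Number of required fields")
ax.set_ylabel("Average conversion rate (%)")
ax.set_title("Average conversion rate by number of required fields")
plt.tight_layout()
plt.show()
```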

In terms of grasping the relationship in the data, it was pretty effective for a rough-and-ready chart.

In less than a second, you were able to see that conversion rates go down as you increase the number of required fields—but only until you hit four required fields. At this point, average conversion rates (intriguingly!) start to increase.

But you can do better…

For a good data visualization, you want to gracefully straddle the line between complexity and understanding:

How can we add layers of information and aesthetics that enrich the data visualization, without compromising understanding?

No matter how clever the choice of the information, and no matter how technologically impressive the encoding, a visualization fails if the decoding fails.

Adding layers of information can’t be at the expense of your message—rather, it has to be in service of that message and your audience. So, when you add anything to the chart above, the key question to keep in mind is:

Will this support or undermine making informed business decisions?

In this case, you can have some fun by going through a few iterations of the chart, to see if any visualization works better than the bar chart.

The dot plot

Compared to a bar chart, a dot plot encodes the same information, while using fewer pixels (which lowers visual load), and unshackles you from a y-axis starting at zero (which is sometimes controversial, according to this Junk Charts article and this Stephanie Evergreen article).

In the context of digital experimentation, not starting the y-axis at zero generally makes sense because even small differences between conversion rates often translate into significant business impact (depending on number of visitors, the monetary / lifetime value of each conversion, etc.).

In other words, you should design your visualization to make apparent small differences in conversion rates because these differences are meaningful—in this sense, you’re using the visualization like researchers use a microscope.

If you are still not convinced, an even better idea (especially for an internal presentation) would be to map conversion rate differences to revenue—in that case, these small differences would be amplified by your site’s traffic and conversion goal’s monetary value, which would make trends easier to spot even if you start at 0.

Either way, as long as the dots are distant enough, large enough to stand out but small enough to not overlap along any axis, reading the chart isn’t significantly affected.

[Figure: dot plot of average conversion rate by number of required fields]
Compared to the bar chart, the dot plot lowers the visual load and gives us flexibility with our y-axis (it does not start at zero), allowing us to emphasize the trend.
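Here is a sketch of the same switch from bars to dots, again using the hypothetical summary table and matplotlib; the only real changes are the marker style and letting the y-axis float above zero:

```python
# Dot-plot version of the same data: fewer pixels per data point, and a
# y-axis that is free to start above zero so small differences stay visible.
# Continues from the hypothetical `summary` table in the earlier sketch.
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(summary["required_fields"], summary["avg_conversion_rate"], "o", markersize=8)
ax.set_xlabel("Number of required fields")
ax.set_ylabel("Average conversion rate (%)")
ax.margins(y=0.15)  # pad the automatic y-limits instead of forcing them to zero
plt.tight_layout()
plt.show()
```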

More importantly (spoiler alert!), our newly-found real estate (after changing from bars to dots) allows you to add layers of information without cluttering the data visualization.

One such layer is the data’s density (or distribution), represented by a density plot.

A density plot

A density plot uses the height of the curve to show roughly how many data points (what percentage of sites) require how many fields. In this case, the density plot adds the third column (“Percent of Sites”) from the table you saw earlier.

That makes it easy to see (once you understand how density plots work) how much stock to place in those averages.

For example, an average that is calculated on a small number of sites (say, less than 1% of the available data) is not as important or informative as an average that represents a greater number of sites.

So, if an average was calculated based on a mere ten sites, we would be more wary of drawing any inferences pertaining to that average.

[Figure: dot plot with a density plot showing the distribution of sites by number of required fields]
After adding the density plot, you can see that most sites require two fields, roughly the same require one and three, and after eight required fields, the distribution is pretty much flat—meaning that we don’t have many data points. So, those incredibly high conversion rates (relative to the rest) are fairly noisy and unrepresentative—something we’ll verify with confidence intervals later on.
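One way to layer in that distribution, as a sketch, is a kernel density estimate of the per-site field counts drawn on a secondary axis (assuming the hypothetical `forms` and `summary` objects from the earlier sketches; a weighted histogram would work just as well):

```python
# Dot plot plus a density layer: a kernel density estimate of how many sites
# require how many fields, drawn on a secondary axis. Continues from the
# hypothetical `forms` and `summary` objects in the earlier sketches.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import gaussian_kde

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(summary["required_fields"], summary["avg_conversion_rate"], "o")
ax.set_xlabel("Number of required fields")
ax.set_ylabel("Average conversion rate (%)")

density = gaussian_kde(forms["required_fields"])          # distribution of sites
xs = np.linspace(forms["required_fields"].min(), forms["required_fields"].max(), 200)
ax2 = ax.twinx()
ax2.fill_between(xs, density(xs), alpha=0.2, color="grey")
ax2.set_ylabel("Density of sites")
ax2.set_ylim(bottom=0)
plt.tight_layout()
plt.show()
```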

Visualizing uncertainty and confidence intervals

When we add the density plot, we see that most of our data comes from sites that require between one and four fields (80%, if you added the percentages in the table), the next big chunk (19%) come from sites that require five to nine fields, and the remaining 1% (represented by the flat sections of the density curve) require more than nine. (The 80/20 rule strikes again!)

Another useful layer of information is the confidence interval for these averages. Given the underlying data (and how few data points go into some averages), how can we represent our confidence (or uncertainty) surrounding each average?

Explaining Confidence Intervals

If you’ve never encountered confidence intervals before, here’s a quick example to explain the intuition behind them…

Let’s say you’re taking a friend camping for three days, and you want to give them enough information so they can pack appropriately.

You check the forecast and see lows of 70°F, highs of 73°F, and an average of 72°F.

So, when you tell your friend “it’s going to be about 72°F“—you’re fairly confident that you’ve given them enough information to enjoy the trip (in terms of packing and preparing for the weather, of course).

On the other hand, suppose you’re camping in a desert that’s expecting lows of 43°F, highs of 100°F, and (uh oh) an average of 72°F.

Assuming you want this person to travel with you again, you probably wouldn’t say, “it’s going to be about 72°F.” The information you provided would not support them in making an informed decision about what to bring.

That’s the idea behind confidence intervals: they represent uncertainty surrounding the average, given the range of the data, thereby supporting better decisions.

Visually, confidence intervals are represented as lines (error bars) that extend from the point estimate to the upper and lower bounds of our estimate: the longer the lines, the wider our interval, the more variability around the average.

When the data are spread out, confidence intervals are wider, and our point estimate is less representative of the individual points.

Conversely, when the data are closer together, confidence intervals are narrower, and the point estimate is more representative of the individual points.
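For the curious, here is a sketch of how those intervals might be computed and drawn as error bars, using the usual t-based interval (mean ± t × standard error) on the hypothetical per-site data; it is an illustration of the idea, not the exact method behind the chart below:

```python
# t-based 95% confidence intervals around each average, drawn as error bars.
# Continues from the hypothetical per-site `forms` DataFrame; groups with a
# single site get no interval (their standard error is undefined).
import matplotlib.pyplot as plt
from scipy import stats

grouped = forms.groupby("required_fields")["conversion_rate"]
means = grouped.mean()
sems = grouped.sem()                                 # standard error of each mean
t_crit = stats.t.ppf(0.975, df=grouped.size() - 1)   # two-sided 95% critical values
half_widths = sems * t_crit                          # interval half-width: t * s / sqrt(n)

fig, ax = plt.subplots(figsize=(8, 4))
ax.errorbar(means.index, means.values, yerr=half_widths.values, fmt="o", capsize=4)
ax.set_xlabel("Number of required fields")
ax.set_ylabel("Average conversion rate (%)")
plt.tight_layout()
plt.show()
```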

[Figure: dot plot with confidence intervals]
Once you add error bars, you can see that many of those enticingly high conversion rates are muddled by uncertainty: at twelve required fields, the conversion rate ranges from less than 10% to more than 17%! Though less extreme, a similar concern holds for data points at ten and eleven required fields. What’s happening at thirteen, though?

At this point, there are two things to note: first, when you look at this chart, your attention will most likely be drawn to the points with the widest confidence intervals.

That is, the noisiest estimates (the ones with fewer data points and / or more variability) take up the most real estate and command the most attention.

Obviously, this is not ideal—you want to draw attention to the more robust and informative estimates: those with lots of data and narrower intervals.

Second, the absence of a confidence interval around thirteen required fields means that either there’s only one data point (which is likely the case, given the density curve we saw earlier), or all the points have the same average conversion rate (not very likely).

Luckily, both issues have the same solution: cut them out.

How to best handle outliers is a lively topic—especially since removing outliers can be abused to contort the data to fit our desired outcomes. In this case, however, there are several good reasons to do so.

The first reason has already been mentioned: these outliers come from less than 1% of our entire data set, so even after removing them, we are still representing 99% of our data.

Second, they are not very reliable or representative, as evidenced by the density curve and the error bars.

Finally, and more importantly—we are not distorting the pattern in the data: we’re still showing the unexpected increase in the average conversion rate beyond four required fields.

We are doing so, however, using the more reliable data points, without giving undue attention to the lower quality ones.
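As a sketch, the cut itself is a simple filter on the hypothetical summary table, using an illustrative 1%-of-sites threshold:

```python
# Dropping the thinly supported field counts: keep only those that account
# for at least 1% of sites (an illustrative cutoff; still ~99% of the data).
MIN_SHARE = 1.0  # percent of sites

kept = summary[summary["percent_of_sites"] >= MIN_SHARE]
dropped = summary[summary["percent_of_sites"] < MIN_SHARE]
print(f"Removed {dropped['percent_of_sites'].sum():.1f}% of sites "
      f"({len(dropped)} field counts)")
```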

Lastly, to visualize and quantify our answer to the question that sparked the whole analysis (how do conversion rates change when we vary the number of required fields?), we can add two simple linear regressions: the first going from one to four required fields, the second from four to nine required fields.

Why two, instead of the usual one?

Because we saw from the density chart discussion that 80% of our data comes from sites requiring one to four fields, a subset that shows a strong downward trend.

Given the strength of that trend, and that it spans the bulk of our data, it’s worth quantifying and understanding, rather than diluting it with the upward trend from the other 20%.

That remaining 20%, then, warrants a deeper analysis: what’s going on there—why are conversion rates increasing?

The answer to that will not be covered in this article, but here’s something to consider: could there be qualitative differences between sites, beyond four required fields? Either way, the regression lines make the trends in the data clearer to spot.

[Figure: dot plot with two regression lines]
The regression lines draw attention to the core trend in the data, while also yielding a rough estimate of how much conversion rates decrease with the increase in required fields.
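For reference, fitting two segment-wise trend lines like these can be sketched with numpy’s polyfit on the filtered `kept` table from the previous step (an unweighted fit on the averages, purely for illustration; the real analysis may have fit the underlying site-level data):

```python
# Two segment-wise linear fits on the filtered `kept` table from the previous
# step: one for 1-4 required fields, one for 4-9 (a degree-1 polyfit returns
# slope and intercept).
import numpy as np

low = kept[kept["required_fields"].between(1, 4)]
high = kept[kept["required_fields"].between(4, 9)]

slope_low, intercept_low = np.polyfit(low["required_fields"], low["avg_conversion_rate"], 1)
slope_high, intercept_high = np.polyfit(high["required_fields"], high["avg_conversion_rate"], 1)

print(f"1-4 required fields: {slope_low:+.2f} points of conversion rate per extra field")
print(f"4-9 required fields: {slope_high:+.2f} points per extra field")
```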

After adding the regression line, you summarize the main take-away with a nice, succinct subtitle:

Increasing the number of Required Fields from one to four decreases average conversion rate by 1.2% per additional field, for 80% of sites.

This caption helps orient anyone looking at the chart for the first time—especially since we’ve added several elements to provide more context.

Note how the one statement spans the three main layers of information we’ve visualized:

  1. The average conversion rate (as point estimates)
  2. The distribution of the data (the density curve)
  3. The observed trend

Thus, we’ve taken a solid first pass at answering the question:

How do conversion rates change when we vary the number of required fields?

Does this mean that all sites in that 80% will lose ~1% conversion rate for every required field after the first?

Of course not.

As mentioned in the opening section, this is the simplest question that’ll provide some insight into the problem at hand. The lowest-hanging fruit, if you will.

However, it is far from a complete answer.

You’ve gently bumped into the natural limitation of bivariate analyses (an analysis with only two variables involved): you’re only looking at the change in conversion rate through the lens of the number of required fields, when there are obviously more variables at play (the type of site, the client base, etc.).

Before making any business decisions, you would need a deeper dive into those other variables and would (ideally!) incorporate lead quality metrics to better understand how the number of required fields impacts total revenue.

And this is where you come back full circle to experimentation: you can use this initial incursion to start formulating and prioritizing better experiment ideas.

For example, a successful experimentation strategy in this context would have to, first, better understand the two groups of sites discussed earlier: those in the 80% and those in the other 20%.

Additionally, more specific tests (i.e., those targeting sub-domains) would have to consider whether a site belongs to the first group (where conversion rates decrease as the number of required fields increase) or the second group (where the inverse happens)—and why.

Then, we can look at which variables might explain this difference, and what values these variables take for that site.

For example, are sites in the first group B2C or B2B? Do they sell more or less expensive goods? Do they serve different or overlapping geographic regions?

In short, you’ve used data visualization to illuminate a crucial relationship to stakeholders, and to identify knowledge gaps when considering customer behaviour across a range of sites.

Addressing these gaps would yield even more valuable insights in the iterative process of data analysis.

And these insights, in turn, can guide the experimentation process and improve business outcomes.

Your audience needs to trust your data visualization—and you.

When your experimentation team and Executives can get into the boardroom together, it’s disruptive to your business. It shakes your organization from the status quo, because it introduces new ways of making decisions.

Data-driven decisions are proven to be more effective.

In fact, MIT’s Sloan School of Management surveyed 179 large publicly traded firms and found that those that used data to inform their decisions increased productivity and output by 5-6%.

And data analysts have the power to make decision-making among Executive teams more informed.

Rather than relying solely on the Executive’s ability to reason through the five domains of the Cynefin Framework, data visualization demonstrates the true power of experimentation: its ability to solve real business problems.

But like any working dynamic, you need to foster trust—especially when you are communicating the insights and trends of data. You need to appear objective and informed.

You need to guide your audience through the avenues of action that are made clear by your analysis.

Of course, you can do this through speech. But you can also do this through the design of your data visualizations.

[Figure: experimentation program dashboard]
Data visualizations help your Executive team keep a pulse on what is happening in your experimentation program and allow them to understand how it can impact internal decision making.

Whether you are presenting them in a dashboard where your team can keep a pulse on what’s happening with your experimentation program, or if it’s a simple bar graph or dot plot in your slide deck, your data visualizations matter.

Clear labeling and captions, graphic elements that showcase your data dimensions, lowering visual load, and even using color to distinguish elements in your data visualization—these help your audience see what possibilities exist.

They help your audience identify patterns and associations—and even ask questions that can be further validated through experimentation.

Because experimentation takes the guesswork out of decision making. Your data visualizations make it easier for the Executive to navigate the complexity of the situations they are challenged with today.

And that is, ultimately, the most persuasive way to evangelize experimentation at your organization.

How impactful have you found strong data visualizations on your team’s decision-making process? We’d love to hear about it.

Author

Wilfredo Contreras

Senior Data Analyst

Contributors

Lindsay Kwan

Marketing Communications Specialist


Confirm the integrity of your data

Never before has there been a greater need for a reliable, holistic marketing measurement tool. In a world of fractured media and consumer interest, intense competitive pressure, and lightning-speed product innovation, the sheer volume of data that must be analyzed and the decisions that must be made demand a more evolved approach to attribution and decision making. This need for speed has brought into bright focus a mandate for reliable, consistent and valid data, and the potential for challenges when there are errors.

The attribution category has been evolving quickly over the past decade, and there are myriad options from which marketers can choose. Recent research conducted by Forrester suggests that leading marketers are adopting the newest and most advanced approach: Unified Measurement or Total Marketing Measurement models. This analysis combines the attributes of person-level measurement with the ability to measure traditional channels such as TV. Marketers who upgrade to and invest in novel solutions – financially and organizationally – can find a competitive advantage from smarter attribution.

The greatest of these instruments answer questions such as the optimal frequency and reach within and between channels, and determine which messages and creative are best for which audiences. New advances in these products are providing even more granular insights concerning message sequencing, and next-best message decisioning based on specific audiences and multiple stages of their buying processes. The best of these solutions incorporate external and environmental circumstances such as weather, travel patterns and more. Furthermore, the capabilities of today’s solutions produce insights in such a timely fashion that agile marketers can include those insights in active campaigns to drive massive performance gains, rather than waiting for weeks or months to see returns.

However, while these attribution models have evolved a long way in recent years, there is one challenge that all must tackle: the need for reliable, consistent and valid data. Even the most advanced and powerful of these systems are dependent on the quality of the information they ingest. Incorrect or sub-par input will always produce the wrong outputs. Data quality and reliability have become a primary focus of marketing teams and the forward-thinking CMOs who lead them.

If the data are not accurate, it doesn’t matter what statistical methods or algorithms we apply, nor how much experience we have in interpreting data. If we start with imperfect data, we’ll end up with erroneous results. Basing decisions on a conclusion derived from flawed data can have costly consequences for marketers and their companies. Inaccurate data may inflate or give undue credit to a specific tactic. For example, a model may indicate that, based on a media buy, a television advertisement (usually one of the most expensive of our marketing efforts) was responsible for driving an increase in visitors to our website. But if this ad failed to air and the media log contains inaccurate data, the team may wrongly reallocate budget to their television buy. This would be a costly mistake.

In fact, inaccurate data may be one of the leading causes of waste in advertising. These inaccuracies have become an epidemic that negatively impacts both advertisers and the consumers they are trying to reach. Google recently found that, due in large part to bad data, more than 56 percent of ad impressions never actually reach consumers, and Proxima estimates $37 billion of worldwide marketing budgets go to waste on poor digital performance. And that’s just digital. The loss for major players who market online and offline can be extensive, and it’s calling for a revolutionary new approach to data quality and reliability.

So, how accurate is your data? Do you know if there are gaps? Are there inconsistencies that may skew your results? Many of us put so much trust in our data systems that we forget to ask these critical questions. You can’t just assume you have accurate data – now more than ever you must know you do. That may require some work up front, but the time you invest in ensuring accurate data will pay off in better decisions and other significant improvements. Putting steps and checks in place early in the process to ensure the timely and accurate reporting of data is key to avoiding costly mistakes down the road. Solving these problems early in your attribution efforts helps build confidence in the optimization decisions you’re making to drive higher return on investment and, perhaps more importantly, will help teams avoid costly missteps.

When it comes to attribution, it is especially critical to make sure the system you are relying on has a process for analyzing and ensuring that the data coming in is accurate.

Below are four key considerations you can use, when working with your internal analytics staff, agencies, marketing team and attribution vendor, to improve data input and validation and ensure accurate conclusions.

1. Develop a data delivery timetable

The entire team should have a clear understanding of when data will be available and, more importantly, by what date and or time every data set will arrive. Missing or unreported data may be the single most significant threat to drawing accurate conclusions. Like an assembly line, if data fails to show up on time, it will stop production for the entire factory. Fortunately, this may also be one of the easiest of the challenges to overcome. Step one is to conduct an audit of all the information you are currently using to make decisions. Map the agreed upon or expected delivery date for every source. If you receive a weekly feed of website visitors, on what day does it typically arrive? If your media agency sends a monthly reconciliation of ad spend and impressions, what is the deadline for its delivery?

Share these sources of information and the schedule of delivery with your attribution vendor. The vendor, in turn, should develop a dashboard and tiered system of response for data flow and reporting. For example, if data is flowing as expected, the dashboard may include a green light to indicate all is well. If the information is a little late, even just past the scheduled date but within a predefined window of time, the system should generate a reminder to the data provider or member of the team who is responsible for the data letting them know that there may be a problem. However, if data is still missing past a certain point, threatening the system’s ability to generate optimizations, for example, an alert should be sent to let the team know that action is needed.
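A minimal sketch of such a tiered freshness check follows, with an illustrative grace window and escalation threshold (the real dashboard and alerting hooks would come from your team or attribution vendor):

```python
# Tiered freshness check for an expected data delivery. The grace window and
# critical threshold below are illustrative assumptions, not vendor defaults.
from datetime import datetime, timedelta
from typing import Optional

GRACE = timedelta(hours=12)    # "a little late, but within a predefined window"
CRITICAL = timedelta(days=2)   # past this point, optimizations are threatened

def freshness_status(expected: datetime, received: Optional[datetime], now: datetime) -> str:
    if received is not None and received <= expected + GRACE:
        return "green"      # data flowing as expected
    if now <= expected + CRITICAL:
        return "reminder"   # nudge the data owner: the feed is running late
    return "alert"          # escalate to the team: action is needed

# Example: a weekly web-analytics feed expected Monday at 09:00 has not arrived.
status = freshness_status(
    expected=datetime(2019, 7, 1, 9, 0),
    received=None,
    now=datetime(2019, 7, 2, 12, 0),
)  # -> "reminder"
```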

2. Create standard templates for routinely reported data

You, members of your team, and your attribution partner need a clear understanding of what specific data is included in which report and in what formats. It would be a shame to go through the hard work of making sure your information is arriving on time only to find out that the data is incomplete or reported inconsistently. To use the assembly line analogy again, what good is it to make sure a part arrives on time if it’s the wrong part that’s delivered?

Like quality control or a modern-day retinal scan, the system should check to see if the report matches expected parameters. Do the record counts match the number of records you expected to receive? If data from May was expected, do the dates make sense? And, is all the information that should be in the report included? Are there missing data?

With this system in place, a well-configured attribution solution or analytics tool should be able to test incoming data for both its completeness and compliance with expected norms. If there are significant gaps in the data or if data deviates overmuch from an acceptable standard, the system can again automatically alert the team that there may be a problem.
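As a sketch of what such a template check might look like, assuming a pandas DataFrame and illustrative column names and tolerances:

```python
# Checking an incoming report against an agreed template. Column names,
# tolerances and date handling are illustrative assumptions.
import pandas as pd

EXPECTED_COLUMNS = {"date", "channel", "spend", "impressions"}

def validate_report(report: pd.DataFrame, expected_rows: int,
                    period_start: str, period_end: str) -> list:
    """Return a list of problems; an empty list means the report passed."""
    problems = []

    missing_cols = EXPECTED_COLUMNS - set(report.columns)
    if missing_cols:
        problems.append(f"missing columns: {sorted(missing_cols)}")

    # Does the record count match what we expected to receive (within 10%)?
    if abs(len(report) - expected_rows) > 0.1 * expected_rows:
        problems.append(f"unexpected record count: {len(report)} vs {expected_rows}")

    # If data from May was expected, do the dates make sense?
    if "date" in report.columns:
        dates = pd.to_datetime(report["date"], errors="coerce")
        if dates.isna().any():
            problems.append("unparseable dates in report")
        elif dates.min() < pd.Timestamp(period_start) or dates.max() > pd.Timestamp(period_end):
            problems.append("dates fall outside the expected reporting period")

    # Are there missing data?
    present = list(EXPECTED_COLUMNS & set(report.columns))
    if report[present].isna().any().any():
        problems.append("report contains missing values")

    return problems
```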

3. Use previous data from the source to confirm new data

Your attribution provider should be able to use data previously reported from a source to help identify any errors or gaps in the system. For example, you can include in your data feed multiple weeks or months of previously reported data. This feed will produce one new set of data and three previous sets of overlapping data. If the overlapping data does not match, that will trigger an alert.

Now you’ll want to determine if the data makes sense. You want to see if new data is rational and consistent with that which was previously reported. This check is a crucial step in using previously published data to confirm the logic of more recent data reported.

Here, too, you can check for trends over time to see if data is consistent or if there are outliers. Depending on the specific types of media or performance being measured a set of particular logic tests should be developed. For example, is the price of media purchased within the range of what is typically paid? Is the reach and frequency per dollar of the media what was expected?
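Here is a sketch of two such checks in pandas, assuming weekly feeds keyed by week and channel with illustrative column names and CPM bounds: one compares re-delivered overlapping weeks against what was stored before, the other flags media prices outside the range typically paid.

```python
# Two of these checks in pandas. Column names ("week", "channel", "spend",
# "impressions") and the CPM bounds are illustrative assumptions.
import pandas as pd

def check_overlap(previous: pd.DataFrame, incoming: pd.DataFrame) -> pd.DataFrame:
    """Rows where re-reported spend disagrees with what was stored earlier."""
    merged = previous.merge(incoming, on=["week", "channel"], suffixes=("_old", "_new"))
    return merged[merged["spend_old"] != merged["spend_new"]]

def check_cpm_range(incoming: pd.DataFrame, low: float = 2.0, high: float = 60.0) -> pd.DataFrame:
    """Rows whose effective CPM falls outside the range typically paid."""
    cpm = incoming["spend"] / incoming["impressions"] * 1000
    return incoming[(cpm < low) | (cpm > high)]

# A non-empty result from either check should trigger an alert to the team.
```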

Leading providers of marketing attribution solutions are continually performing these checks to ensure data accuracy and consistent decision making. With these checks in place, the marketing attribution partner can diagnose any problems, and the team can act together to fix them. This technique has the added benefit of continuously updating information to make sure errors, or suspicious data, don’t linger to confound ultimate conclusions.

One note here that should be taken into account: outliers are not necessarily pieces of bad data. Consider outliers as pieces of information that have not yet been confirmed or refuted. It is a best practice to investigate outliers to understand their source, or to hold them in your system to see whether they’re the beginnings of a new trend.

4. The benefit of getting information from multiple sources

Finally, there are tangible benefits to confirming data from multiple data sets. For example, does the information about a customer contained in your CRM conform with the information you may be getting from a source like Experian? Does data you’re receiving about media buys and air dates match the information you may be receiving from Sigma encoded monitoring?

Even companies that are analytics early adopters find themselves challenged to ensure the data upon which they rely is consistent, reliable and accurate. Marketers understand that they have to be gurus of data-driven decision making, but they can’t just blindly accept the data they are given.

Remember, as we have mentioned, despite the potential benefits of a modern attribution solution, erroneous data ensures its undoing. To be certain your process is working precisely, create a clear understanding of the data and work with a partner who can build an early warning system for any issues that arise. Ultimately, this upfront work ensures more accurate analysis and will help achieve the goal of improving your company’s marketing ROI.

As a very first step, since data may come from multiple departments inside the company and various agencies that support the team, develop a cross-functional steering committee consisting of representatives from analytics, marketing, finance, as well as digital and traditional media agencies; the steering committee should have a member of the team responsible for overall quality and flow. As a team, work together to set benchmarks for quality and meet regularly to discuss areas for improvement.

In this atmosphere of fragmented media and consumer (in)attentiveness, those who rely on data-driven decision-making will gain a real competitive advantage in the marketplace. The capabilities of today’s solutions produce insights in such a timely fashion that the nimblest marketers can incorporate those insights into active campaigns to drive massive performance improvements, rather than waiting for weeks or months to see results. But the Achilles heel of any measurement system is the data upon which it relies to generate insight. All other things being equal, the better the data going in, the better the optimization recommendations coming out.


Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.


About The Author

Rex Briggs is Founder and CEO of Marketing Evolution and has more than 20 years of experience in research and analytics. Rex focuses on omni-channel personal level marketing attribution and optimization. He served on the review board of JAR, and serves on Research World’s editorial board. He is the best-selling author of two books, “What Sticks, Why Advertising Fails and How to Guarantee Yours Succeeds” (2006) and “SIRFs Up, The Story of How Algorithms And Software Are Changing Marketing.”


4 simple ways small businesses can use data to build better customer relationships

In a world where customers are bombarded across every possible channel with brand messages, targeting is more important than ever before. Small businesses need to be able to make their campaigns feel relevant and personal in order to keep up, but the processes involved – collecting, organizing and interpreting customer data to make it actionable – are often intimidating to small businesses and solo entrepreneurs with limited time and resources.

Collecting, organizing and learning from your customer data is critical no matter how large your team is or what stage of growth you’re in. In fact, there’s no better time to consider your processes for data than when you’re just starting out. And getting started with basic strategies for building customer relationships doesn’t have to be difficult – there are some simple steps you can take to save yourself a lot of time as your business grows and scales.

From the moment you start your business and establish an online presence, you should be laying the groundwork for effective CRM strategies. This includes: establishing a single source of truth for your customer data, being thoughtful and organized about how you collect information, and setting up the right processes to interpret that data and put it to work for your marketing. Here are some actionable steps (with examples) to take now:

  • Collect: Make sure you’re set up to onboard people who want to be marketed to. Whether you’re interacting online or in person, you should be collecting as many insights as possible (for example, adding a pop-up form to your website to capture visitors, or asking people about their specific interests when they sign up for your email list in store) and consolidating them so you can use them to market.
  • Organize: Once you have this data, make sure you’re organizing it in a way that will give you a complete picture of your customer, and make it easy to access the insights that are most important for your business to know. Creating a system where you can easily sort your contacts based on shared traits – such as geography, purchase behaviors or engagement levels – will make it much easier to target the right people with the right message (a simple segmentation sketch follows this list).
  • Find insights: Find patterns in data that can spark new ideas for your marketing. For example, the realization that your most actively engaged customers are in the Pacific Northwest could lead to a themed campaign targeting this audience, a plan for a pop-up shop in that location or even just help you plan your email sends based on that time zone.
  • Take action: Turn insights into action, and automate to save time. As you learn more about your audience and what works for engaging them, make sure you’re making these insights scalable by setting up automations to trigger personalized messages based on different demographic or behavioral data.
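As a simple illustration of the “Organize” step, here is a sketch of segmenting a hypothetical contact list by shared traits with pandas; any spreadsheet, CRM or email tool that supports tags and filters achieves the same thing:

```python
# A hypothetical contact list with the kinds of traits mentioned above.
import pandas as pd

contacts = pd.DataFrame({
    "email": ["a@example.com", "b@example.com", "c@example.com"],
    "region": ["Pacific Northwest", "Northeast", "Pacific Northwest"],
    "purchases_last_year": [4, 0, 2],
    "emails_opened_of_last_5": [5, 1, 3],
})

# Segment: engaged customers in the Pacific Northwest, e.g. for a themed
# campaign or a pop-up shop announcement.
segment = contacts[
    (contacts["region"] == "Pacific Northwest")
    & (contacts["emails_opened_of_last_5"] >= 3)
]
```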

Doing this right won’t just result in more personalized marketing campaigns and stronger, more loyal customer relationships – it will also help you be smart about where you focus your budget and resources as you continue to grow.


Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.


About The Author

As VP of Marketing, Darcy Kurtz leads Mailchimp’s product marketing team. Her team aligns product strategy with marketing execution to make Mailchimp’s sophisticated marketing technology accessible for small businesses worldwide. Darcy joined Mailchimp with more than 25 years of experience leading global marketing at companies like Dell, Sage and Outsystems. She has a career-long passion for serving small businesses.
