Google Ads store visits, store sales reporting data partially corrected

Another update in the Google Ads reporting bug saga that started on May 2. There’s progress on the store visits and store sales data inaccuracies.

The latest update. Store sales and store visits data has been fixed for April 28 and 29, as well as May 3 onward. “We are actively working on correcting the data for April 30, May 1 and May 2 (all dates in PDT),” Google said in an update to its blog post on the bug Friday.

Recap on what data is still incorrect. Based on Friday’s update, here’s what is still inaccurate:

  • Store visits and store sales: April 30, May 1, May 2
  • All other reporting: 12:01 a.m. on May 1 to 4 a.m. May 2 (PDT)

Why we should care. If you count on complete data and report on store visits or store sales, you'll want to keep holding off on April reporting. All other advertisers will not have accurate weekly reporting for last week.

We don’t recall a reporting glitch in Google Ads going unresolved for such a long period of time. Advertisers weren’t alerted to the store conversion data problems until six days after the bug was initially reported. As a reminder, this bug only affects reporting, not any automated bidding models.


About The Author

Ginny Marvin is Third Door Media’s Editor-in-Chief, managing day-to-day editorial operations across all of our publications. Ginny writes about paid online marketing topics including paid search, paid social, display and retargeting for Search Engine Land, Marketing Land and MarTech Today. With more than 15 years of marketing experience, she has held both in-house and agency management positions. She can be found on Twitter as @ginnymarvin.


Google Search adds support for FAQ and How-to structured data

At Google I/O today, Google announced support for new FAQ and How-to structured data. Google previewed FAQ and How-to markup at the 2018 Google I/O event a year ago, but it has now launched the structured markup needed to bring these rich search results to life.

How-to results. How-to search results in Google will show searchers step-by-step information on how to accomplish specific tasks directly in the search results. Google has published How-to documentation for developers to use when adding the markup to your pages, as well as documentation for bringing these How-tos to the Google Assistant. The documentation covers the steps, tools, duration, and other properties you would include in your markup.
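
To give a sense of the shape of this markup, here is a minimal, hypothetical sketch in Python that emits a HowTo JSON-LD block. The task, steps, and values are made up; check Google's documentation for the exact required and recommended properties.

```python
import json

# Hypothetical example: HowTo structured data for a made-up task.
# Property names follow schema.org's HowTo type; Google's documentation
# lists which properties it requires or recommends.
how_to = {
    "@context": "https://schema.org",
    "@type": "HowTo",
    "name": "How to change a flat bicycle tire",  # hypothetical task
    "totalTime": "PT20M",                          # ISO 8601 duration
    "tool": [{"@type": "HowToTool", "name": "Tire lever"}],
    "step": [
        {"@type": "HowToStep", "text": "Remove the wheel from the frame."},
        {"@type": "HowToStep", "text": "Lever the tire off the rim and swap the tube."},
        {"@type": "HowToStep", "text": "Re-seat the tire and inflate it."},
    ],
}

# Emit the JSON-LD block you would embed in the page's HTML.
print('<script type="application/ld+json">')
print(json.dumps(how_to, indent=2))
print("</script>")
```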

Here are screen shots of what it looks like in search:

Here are screen shots of what this looks like in the Google Assistant:

How-to Search Console report. Google also added a new How-to enhancement report in Search Console that shows you your errors, warnings and valid items for pages with HowTo structured data. Here is a screen shot of this report.

FAQ results. Google also announced new search results for FAQs in Search and on the Google Assistant. This is designed for FAQ pages that provide a list of frequently asked questions and answers on a particular topic. Adding this structured data helps Google show questions and answers directly in Google Search and on the Assistant. Google has published documentation for this markup, along with separate documentation for the Assistant.
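
Here is a similarly minimal, hypothetical sketch of FAQPage markup, again generated in Python. The questions and answers are placeholders; refer to Google's documentation for the full property list.

```python
import json

# Hypothetical example: FAQPage structured data for two made-up questions.
# Property names follow schema.org's FAQPage, Question, and Answer types.
faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Do you ship internationally?",
            "acceptedAnswer": {"@type": "Answer", "text": "Yes, to most countries."},
        },
        {
            "@type": "Question",
            "name": "What is your return policy?",
            "acceptedAnswer": {"@type": "Answer", "text": "Returns are accepted within 30 days."},
        },
    ],
}

print('<script type="application/ld+json">')
print(json.dumps(faq_page, indent=2))
print("</script>")
```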

Google warns you not to confuse this with Q&A pages, which are for forums or other pages where users can submit answers to questions.

Here is how FAQ results look in search:

Here is how FAQ results look in Assistant:

Google also launched a Google Search Console enhancement report for FAQ structured data.

Why we care. Making your search results more prominent in Google Search may drive more clicks to your web site. At the same time, if the searcher can get the full answer directly in Google Search or on the Google Assistant, you may never see that searcher end up on your web site. We recommend testing this markup on the relevant pages and seeing whether it leads to more traffic and conversions for your business.


About The Author

Barry Schwartz is Search Engine Land’s News Editor and owns RustyBrick, a NY based web consulting firm. He also runs Search Engine Roundtable, a popular search blog on SEM topics.


Five ways SEOs can utilize data with insights, automation, and personalization

Constantly evolving search results driven by Google’s increasing implementation of AI are challenging SEOs to keep pace. Search is more dynamic, competitive, and faster than ever before.

Where SEOs used to focus almost exclusively on what Google and other search engines were looking for in their site structure, links, and content, digital marketing now revolves solidly around the needs and intent of consumers.

This past year was perhaps the most transformative in SEO, an industry expected to top $80 billion in spending by 2020. AI is creating entirely new engagement possibilities across multiple channels and devices. Consumers are choosing to find and interact with information by voice search, on connected IoT appliances, and on other devices. As a result, brands are being challenged to reimagine the entire customer journey and how they optimize content for search.

How do you even begin to prioritize when your to-do list and the data available to you are growing at such a rapid pace? The points below are intended to help you do just that.

From analysis to activation, data is key

SEO is becoming less a matter of simply optimizing for search. Today, SEO success hinges on our ability to seize every opportunity. Research from my company’s Future of Marketing and AI Study highlights current opportunities in five important areas.

1. Data cleanliness and structure

As the volume of data consumers are producing in their searches and interactions increases, it’s critically important that SEOs properly tag and structure the information we want search engines to match to those queries. Google offers rich snippets and cards that enable you to expand and enhance your search results, making them more visually appealing but also adding functionality and opportunities to engage.

Example of structured data on Google

Google has experimented with a wide variety of rich results, and you can expect them to continue evolving. Therefore, it’s best practice to properly mark up all content so that when a rich search feature becomes available, your content is in place to capitalize on the opportunity.

You can use the Google Developers "Understand how structured data works" guide to get started, and test your structured data for syntax errors before you publish.
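
As a quick supplementary sanity check, you can also confirm that a page's JSON-LD at least parses before you run it through any testing tool. A rough sketch using only the Python standard library (the HTML snippet is a placeholder):

```python
import json
from html.parser import HTMLParser

class JsonLdCollector(HTMLParser):
    """Collects the contents of <script type="application/ld+json"> blocks."""
    def __init__(self):
        super().__init__()
        self._in_jsonld = False
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and ("type", "application/ld+json") in attrs:
            self._in_jsonld = True

    def handle_endtag(self, tag):
        if tag == "script":
            self._in_jsonld = False

    def handle_data(self, data):
        if self._in_jsonld and data.strip():
            self.blocks.append(data)

# Placeholder HTML standing in for a real page fetch.
html = '<script type="application/ld+json">{"@context": "https://schema.org", "@type": "FAQPage"}</script>'

collector = JsonLdCollector()
collector.feed(html)
for block in collector.blocks:
    parsed = json.loads(block)  # raises ValueError if the JSON is malformed
    print(parsed.get("@type"), "markup found")
```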

2. Increasingly automated actionable insights

While Google is using AI to interpret queries and understand results, marketers are deploying AI to analyze data, recognize patterns and deliver insights at rates humans simply cannot achieve. AI is helping SEOs interpret market trends, analyze site performance, gather and understand competitor performance, and more.

It’s not just that we’re able to get insights faster, though. The insights available to us now may have gone unnoticed, if not for the in-depth analysis we can accomplish with AI.

Machines are helping us analyze different types of media to understand the content and context of millions of images at a time, and it goes beyond images and video. With Google Lens, for example, augmented reality will be used to glean query intent from objects rather than expressed words.

Opportunities for SEOs include:

  • Defining the opportunity space more precisely in a competitive context
  • Understanding the underlying need at each stage of the customer journey
  • Deploying longer-tail content informed by advanced search insights
  • Mapping content more closely to specific expressions of consumer intent across the buying journey

3. Real-time response and interactions

In a recent “State of Chatbots” report, researchers asked consumers to identify problems with traditional online experiences by posing the question, “What frustrations have you experienced in the past month?”

Screenshot of users' feedback on website usage experiences

As you can see, at least seven of the top consumer frustrations listed above can be solved with properly programmed chatbots. It’s no wonder that they also found that 69% of consumers prefer chatbots for quick communication with brands.

Search query and online behavior data can make smart bots so compelling and efficient at delivering on consumer needs that, in some cases, the visitor may not even realize they're dealing with an automated tool. It's a win for the consumer, who probably isn't there for a social visit anyway, as well as for the brand, which gets to deliver an exceptional experience while improving operational efficiency.

SEOs have an opportunity to:

  • Facilitate more productive online store consumer experiences with smart chatbots.
  • Redesign websites to support visual and voice search.
  • Deploy deep learning, where possible, to empower machines to make decisions and respond in real time.

4. Smart automation

SEOs have been pretty ingenious at automating repetitive, time-consuming tasks such as pulling rankings reports, backlink monitoring, and keyword research. In fact, a lot of quality digital marketing software was born out of SEOs automating their own client work.

Now, AI is enabling us to make automation smarter by moving beyond simple task completion to prioritization, decision-making, and executing new tasks based on those data-backed decisions.
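
As a toy illustration of what data-backed prioritization can look like (not any specific tool's method), here is a sketch that scores keyword opportunities from a hypothetical rankings export; the keywords, columns, and numbers are all invented.

```python
import pandas as pd

# Hypothetical rankings export: keyword, monthly searches, current position.
rankings = pd.DataFrame({
    "keyword": ["running shoes", "trail shoes", "shoe repair", "insoles"],
    "monthly_searches": [40000, 8000, 3000, 12000],
    "position": [11, 4, 25, 7],
})

# Simple opportunity score: demand weighted by how far the page sits from #1.
# A page already at position 1 scores zero; deep positions on big terms rank highest.
rankings["opportunity"] = rankings["monthly_searches"] * (rankings["position"] - 1)

# Prioritized to-do list for the content team.
print(rankings.sort_values("opportunity", ascending=False))
```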

Survey on content development using AI

Content marketing is one area where AI can have a massive impact, and marketers are on board. We found that just 4% of respondents felt they were unlikely to use AI/deep learning in their content strategy in 2018, while more than 42% had already implemented it.

In content marketing, AI can help us quickly analyze consumer behavior and data, in order to:

  • Identify content opportunities
  • Build optimized content
  • Promote the right content to the most motivated audience segments and individuals

5. Personalizations that drive business results

Personalization was identified as the top trend in marketing at the time of our survey, followed closely by AI (which certainly drives more accurate personalization). In fact, you could argue that the top four trends, namely personalization, AI, voice search, and mobile optimization, are closely connected, if not overlapping in places.

Search insights are being injected into and utilized across multiple channels, from emails and landing pages to paid advertising campaigns. The aim is to better connect content to consumer needs.

Each piece of content produced must be purposeful. It needs to be optimized for discovery, a process that begins in content planning as you identify where consumers are going to find and engage with each piece. Smart content is personalized in such a way that it meets a specific consumer’s need, but it must deliver on the monetary needs of the business, as well.

Check out these 5 steps for making your content smarter from a previous column for more.

How SEOs are uniquely positioned to drive smarter digital marketing forward

As marketing professionals with one foot in analysis and the other solidly planted in creative, SEOs have a unique opportunity to lead the smart utilization and activation of all manner of consumer data.

You understand the critical importance of clean data input (or intelligent systems that can clean and make sense of unstructured data) and differentiating between first and third-party data. You understand economies of scale in SEO and the value in building that scalability into systems from the ground up.

SEOs have long nurtured a deep understanding of how people search for and discover information, and of how technology delivers it. Make the most of your current opportunities by picking the low-hanging fruit for quick wins. Focus your efforts on putting scalable, smart systems in place that will allow you to anticipate consumer needs, react quickly, report on SEO appropriately, and convey business results to the stakeholders who will determine future budgets.

Jim Yu is the founder and CEO of leading enterprise SEO and content performance platform BrightEdge. He can be found on Twitter.



Morgan Stanley holds top spot as activist defense firm: data

BOSTON (Reuters) – Morgan Stanley was ranked as the top adviser to companies publicly targeted by activist investors for the third straight year in 2018, while Goldman Sachs vaulted past two competitors to the No. 2 spot, according to Refinitiv data published on Thursday.

A sign is displayed on the Morgan Stanley building in New York U.S., July 16, 2018. REUTERS/Lucas Jackson

In 2018, Morgan Stanley advised on 22 campaigns, working with Akamai Technologies, SandRidge Energy and Cigna when those companies faced pressure from prominent agitators such as Elliott Management and Carl Icahn, the data showed.

Unlike announced mergers and acquisitions, many companies that fend off activists do so quietly and do not want their advisers making the situations public. This can create discrepancies in the data gathered in the league tables.

Goldman Sachs advised on 18 public campaigns in 2018. In 2017, Goldman advised on six public deals, claiming the fourth spot behind Morgan Stanley, Lazard and Raymond James, Refinitiv data shows.

Lazard dropped to the number three spot in 2018, advising 16 companies. In 2017, a less busy year overall, Lazard advised 14 companies.

Spotlight Advisors, founded by Greg Taxin, a lawyer who worked as an investment banker at Goldman Sachs and Banc of America Securities, made its first appearance on the list, capturing the No. 4 spot ahead of UBS, Citi, Raymond James, Credit Suisse and Moelis & Co.

Activists were busier than ever last year and launched 500 campaigns, 5 percent more than in 2017. They pushed companies to spin off divisions and asked for board seats, among other demands.

Consumer cyclical companies were the most heavily targeted last year, Refinitiv said, with 90 campaigns in the sector. One prominent campaign was at Campbell Soup Co, where Daniel Loeb’s Third Point tried to replace all directors and initially pushed for a sale of the company.

Elliott Management, which launched campaigns at BHP Billiton Ltd, Qualcomm Inc, Bayer AG and Pernod Ricard last year, was ranked as the busiest activist, having launched 27 campaigns in 2018.

It beat out GAMCO Investors for the top spot.

Starboard Value ranked as the third-busiest activist, with 11 campaigns in 2018.

Innisfree and Okapi were the top proxy solicitors, firms hired to gather shareholders’ votes, while Olshan Frome Wolosky beat out two competitors to rank as the busiest law firm with 101 mandates working for activists.

Reporting by Svea Herbst-Bayliss; Editing by Dan Grebler


Leveraging data driven marketing to gain a competitive advantage like retailer SpearmintLOVE

Before SpearmintLOVE became a powerhouse brand offering children's clothing and accessories, it was a blog with a devoted social media following. Founder Shari Lott launched the SpearmintLOVE brand in 2013, capitalizing on the audience she'd amassed (much in the same way Glossier founder Emily Weiss used her blog audience at Into the Gloss as a launching pad for a line of beauty products).

Since then, we’ve seen brands across many different verticals taking a similar approach in pairing content with an e-commerce component, and that content and commerce trend is continuing to grow.

But in the unique case of SpearmintLOVE, the marketing genius didn’t stop there.

When Shari's husband John came on board and brought along his expertise in finance, optimization and improved return on investment (ROI) quickly became top priorities for the business. John knew he wanted to put their customer data to work, so he started using cohort analysis, leveraging the brand's advertising data to study customer behaviors, trends, and patterns over a set period of time.

What is a cohort analysis?

Cohort analysis is a subset of behavioral analytics that takes the data from a given data set (e.g. an e-commerce platform, web application, or online game) and, rather than looking at all users as one unit, breaks them into related groups for analysis. These related groups, or cohorts, usually share common characteristics or experiences within a defined time-span.

Cohort analysis allows a company to “see patterns clearly across the life-cycle of a customer (or user), rather than slicing across all customers blindly without accounting for the natural cycle that a customer undergoes.” By seeing these patterns of time, a company can adapt and tailor its service to those specific cohorts.
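
To make the mechanics concrete, here is a minimal cohort-analysis sketch in Python with pandas. The order history and column names are hypothetical, not SpearmintLOVE's actual data.

```python
import pandas as pd

# Hypothetical order history: one row per purchase.
orders = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 3, 3, 3, 4],
    "order_date": pd.to_datetime([
        "2018-01-05", "2018-03-02", "2018-01-20", "2018-02-15",
        "2018-02-03", "2018-03-10", "2018-04-01", "2018-03-25",
    ]),
})

# A cohort is defined by the month of the customer's first purchase.
orders["order_month"] = orders["order_date"].dt.to_period("M")
orders["cohort"] = orders.groupby("customer_id")["order_month"].transform("min")

# How many months after their first purchase did each order happen?
orders["months_since_first"] = (
    (orders["order_month"].dt.year - orders["cohort"].dt.year) * 12
    + (orders["order_month"].dt.month - orders["cohort"].dt.month)
)

# Distinct active customers per cohort, per month since acquisition.
cohort_counts = (
    orders.groupby(["cohort", "months_since_first"])["customer_id"]
    .nunique()
    .unstack(fill_value=0)
)
print(cohort_counts)  # rows: acquisition month, columns: months since first purchase
```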


With this rich customer data, SpearmintLOVE was able to deliver better, more relevant marketing materials to their audience because they had a clearer idea of what different customer groups needed and when.

The approach paid off in a big way: They saw a 47% decrease in cost per purchase, lowered cost per conversion to $0.11, and started earning an average 34% return on their Instagram ad spending over a 60-day period. Their data-driven approach was so effective that Instagram even went on to feature it as a success story on the company website.


So what does this tell us?

In short: Rich customer data is what lives at the heart of any optimized and highly-functioning marketing strategy.

In this post, we’ll look at how you too can build a data driven marketing approach and ensure you’re leveraging testing and experimentation for an effective and fully-optimized strategy.

Building a data driven marketing approach

When it comes to building a data driven marketing approach, the first step should be planning and documentation. The surprising thing is: Many marketers skip this part completely.

Research shows that less than 50% of marketers have a documented marketing strategy—which means that even fewer have benchmarks or reporting metrics in place that help gauge their efforts. As a result, marketers often end up making decisions based on assumptions, estimations, and guesswork, rather than data and hard numbers. This leads to underperformance, poor ROI, and wasted resources in the marketing department.

Documented marketing strategy stats

But it doesn’t have to be that way.

Thanks to advances in technology that make it easier to collect, visualize, and leverage data, you can create a data driven marketing approach and make decisions based on real customer data. Pairing customer data with experimentation, you can generate actionable customer insights that lead to optimized conversion rates, more relevant, effective marketing materials, and an increased bottom line.

Customer data fuels experimentation
Customer data generates insights, and experimentation helps you validate those insights. It is an ongoing process.

It works, too. Insights-driven organizations are seeing this approach pay off. E-commerce retailer and manufacturer weBoost, for example, saw a 100% lift in year-over-year conversion rate by optimizing their site through experimentation. What’s more: this process also produced a 41.4% increase in completed orders for homepage visitors.

Your marketing team can do this too, but it takes a willingness to experiment and break from the ‘this is how we’ve always done things’ mentality that so often stunts growth and limits opportunities for businesses.

Next, let’s talk more about experimentation and look at a few key elements of a strategy that takes a data-driven approach.

Key elements of a data driven marketing strategy

Not all data driven marketing strategies are created equal. In our experience, the most effective approaches tie in a few signature elements.

1. A process that promotes the Zen Marketing Mindset

Yin and yang of marketing
The Zen Marketing Mindset embraces both the creative and scientific sides of marketing.

Order of operations can be tricky when you’re working to make your customer data actionable. However, with experimentation, you can feed data into hypotheses and actually validate whether or not your ideas work.

To do this, we recommend leveraging a process that marries data and creative thinking with validation. At WiderFunnel, we use the Infinity Optimization Process™, which is a structured approach to experimentation strategy and execution. This multi-step process helps add structure and logic to testing efforts and minimizes false assumptions. That means more accurate experiment outcomes and more validated marketing messages that translate into bottom-line impact.

WiderFunnel Infinity Optimization Process
The Infinity Optimization Process™

The approach is highly effective because it covers the two main sides of marketing: the intuitive, qualitative, exploratory side that imagines potential insights, as well as the quantitative, logical, data-driven, validating side that backs up outcomes with hard numbers. Paired together, they provide meaningful insights that boost marketing efficiency.

2. Exploration to power experimentation

Exploration in this context refers to information gathering and data collection. It is a major part of any successful marketing strategy, as it can help marketers find and develop the most impactful insights.

For exploration to be effective, it’s important to be sure you are considering both qualitative and quantitative data sources. Again, this is where the Infinity Optimization Process comes in handy.

IOP Explore components
The Infinity Optimization Process Explore phase.

The Explore phase focuses on gathering information through many different sources and then prioritizing this information for ideation. In this process, all information collection is centered around the LIFT Model®, which is a framework for understanding your customers’ barriers to conversion and potential opportunities.

In the case of SpearmintLOVE, the insights derived from their cohort analysis would be considered in Explore.


How SpearmintLOVE leverages data driven marketing

We see an example of data driven marketing in action when we look at SpearmintLOVE’s use of cohort analysis, wherein quantitative data is paired with qualitative data to inform experiments.

By testing different products, messaging, and imagery based on the customer lifecycles they’d uncovered via cohort analysis, SpearmintLOVE was able to improve the effectiveness of their efforts and lower costs associated with marketing. By constantly testing and improving their marketing strategy based on real customer data, they’ve found success with customer-centric marketing that both resonates and produces meaningful results.

Bonus: this approach also helped them solve a major problem.

John noticed that within customer cohorts, there was a recurring drop-off in advertising ROI that he couldn’t explain. After studying the data, the answer occurred to him: Their audience’s needs were changing, but the marketing was staying the same.

“Data showed us that our customers were changing,” John Lott told BigCommerce. “We learned that our customers were evolving into different life stages. It took us six months to figure that out. Insights are funny that way.”

So what was the issue?

The young parents SpearmintLOVE initially attracted had newborn babies…but eventually those babies grew into toddlers, which meant shoppers needed to be shown products for older children with different needs. Because SpearmintLOVE was still promoting products and messages aimed at new and first-time parents, it saw a dramatic drop-off in marketing effectiveness as the babies got older and the parents' needs changed.

What we can take from this lesson: Illustrative data and a structured approach help brands build stronger emotional connections with customers and gain a deep understanding of what they both want and need.

Making data driven marketing work for you

When we look at brands like SpearmintLOVE who are seeing incredible success, we see common themes around what’s happening behind the scenes: Customer obsession and data-centric decision-making.

And it’s working. By leaning on data and giving customers what they want, SpearmintLOVE was able to grow its revenue by a whopping 1,100% in just one year.

The question is: What’s keeping you from doing the same?

In future editions of this series, we’ll continue to explore how different brands are executing customer-centric experiences via feedback collection, customer support insights, analytics, and experimentation.

If you’re curious about how to step up your company’s customer experience strategy and get on the level of brands making waves (and money), stay tuned and sign up to get future editions in your inbox.

Author

Kaleigh Moore

WiderFunnel Contributor



How data visualization enhances business decision making

Data visualization is the art of our age.

Just as Michelangelo approached that giant block of Carrara marble and said, “I saw the angel in the marble and carved it until I set him free,” analysts are approaching data with the same visionary and inquisitive mind.

In today’s age, where big data reigns, the art of data analysis is making sense of our world.

Analysts are chiseling the bulk of raw data to create meaning—patterns and associations, maps and models— to help us draw insights, understand trends, and even make decisions from the stories the data tell.

Data visualization is the graphical display of abstract information for two purposes: sense-making (also called data analysis) and communication. Important stories live in our data and data visualization is a powerful means to discover and understand these stories, and then to present them to others.

But data analysis of such complex information is not easily absorbed by the human brain until it is presented as a data visualization.

Tables, charts, and graphs provide powerful representations of numerous data points so that the insights and trends are easily understood by the human brain.

That’s why data visualization is one of the most persuasive techniques to evangelize experimentation today—particularly in an era of ever-decreasing attention spans.

On a slide. On a dashboard in Google Data Studio. Or simply something you plan to sketch on a whiteboard. This presentation of the data will decide whether your trends and insights are understood and accepted, and whether the right inferences are drawn about what action should be taken.

A thoughtfully crafted visualization conveys an abundance of complex information using relatively little space and by leveraging our visual system—whether that’s the optimal number of lead generation form fields or the potential ROI of your program throughout the quarter.

In this post, we dig into the practice of designing data visualizations for your audience. You will learn:

  • How your data visualizations can enhance the Executive decision-making process, using the guidelines of the Cynefin Framework
  • Why data visualizations are the most powerful way for the human brain to compute complex information through dual processing theory
  • What makes data visualizations effective using the five qualities defined by expert Alberto Cairo
  • And a real-world example of how you can work through a problem to arrive at the most effective data visualization for your audience.

The Brain (Or, why we need data visualization)

You may be familiar with System 1 and System 2 thinking, known as dual processing theory. System 1 (or Type 1) is the predominant fast, instinctual decision-making and System 2 (Type 2) is the slow, rational decision-making.


Dual Process Theory

Dual Process Theory categorizes human thinking into two types or systems.


We often relegate System 1 thinking to your audience’s emotions. (We talked about it in “Evangelizing experimentation: A strategy for scaling your test and learn culture” or in “I feel, therefore I buy: How your users make buying decisions.”)

But that immediate grasp over complex information in a data visualization is also related to System 1 thinking.

A large part of our brain is dedicated to visual processing. It’s instinctual. It’s immediate.

If you have a strong data visualization, every sighted person can understand the information at hand. A seemingly simple 5×5 chart can provide a snapshot of thousands of data points.

In other words, visualizing data with preattentive features in mind is akin to designing ergonomic objects: You know that a sofa is made for sitting. You know that a handle on a coffee mug is designed for your hand. (This is called preattentive processing.)

Preattentive processing occurs before conscious attention. Preattentive features are processed very quickly…within around 10 milliseconds.

When creating data visualizations, you are designing for human physiology. Any other method of translating that information is a disservice to your message and your audience.

When we consider the speed of which people understand the multiple data points in a problem through dual processing theory and preattentive processing, it’s almost foolish not to take advantage of data visualization.

When you design data visualizations, you are understanding your audience.

Understanding how Executives make decisions

A data visualization is a display of data designed to enable analysis, exploration, and discovery. Data visualizations aren’t intended mainly to convey messages that are predefined by their designers. Instead they are often conceived as tools that let people extract their own conclusions from the data.

Data analysis allows Executives to weigh the alternatives of different outcomes of their decisions.

And data visualizations can be the most powerful tool in your arsenal, because your audience can see thousands of data points on a simple chart.

Your data visualization allows your audience to gauge (in seconds!) a more complete picture so they can make sense of the story the data tell.

In Jeanne Moore’s article “Data Visualization in Support of Executive Decision Making,” the author explored the nature of strategic decision making through the Cynefin framework.

The Cynefin Framework

The Cynefin Framework aids Executives in determining how to best respond to situations by categorizing them in five domains: Simple, Complicated, Complex, Chaotic and Disorder. Source: HBR’s A Leader’s Framework for Decision Making


The Cynefin Framework

The Cynefin Framework (pronounced ku-nev-in) allows business leaders to categorize issues into five domains, based on the ability to predetermine the cause and effect of their decisions.

Created by David Snowden in 1999 when he worked for IBM Global Services, the Cynefin framework has since informed leadership decision making at countless organizations.

The five domains of the Cynefin Framework are:

  • In the Simple Domain, there is a clear cause and effect. The results of the decision are easy to predict and can be based on processes, best practices, or historical knowledge. Leaders must sense, categorize and respond to issues.
  • In the Complicated Domain, multiple answers exist. Though there is a relationship between cause and effect, it may not be clear at first (think known unknowns). Experts sense the situation, analyze it and respond to the situation.
  • In the Complex Domain, decisions can be clarified by emerging patterns. That’s because issues in this domain are susceptible to the unknown unknowns of the business landscape. Leaders must act, sense and respond.
  • In the Chaotic Domain, leaders must act to establish order in a chaotic situation (an organizational crisis!), and then gauge where stability exists and where it doesn't, to get a handle on the situation and move it into the complex or complicated domain.
  • And in the Disorder Domain, the situation cannot be categorized into any of the other four domains. It is utterly unknown territory. Leaders can analyze the situation and assign different parts of the problem to the other four domains.

In organizations, decision making is often related to the Complex Domain because business leaders are challenged to act in situations that are seemingly unclear or even unpredictable.

Leaders who try to impose order in a complex context will fail, but those who set the stage, step back a bit, allow patterns to emerge, and determine which ones are desirable will succeed. They will discern opportunities for innovation, creativity, and new business models.

David J. Snowden and Mary E. Boone

Poor quarterly results, management shifts, and even a merger—these Complex Domain scenarios are unpredictable, with several methods of responding, according to David J. Snowden and Mary E. Boone.

In other words, Executives need to test and learn to gather data on how to best proceed.

“Leaders who don't recognize that a complex domain requires a more experimental mode of management may become impatient when they don't seem to be achieving the results they were aiming for. They may also find it difficult to tolerate failure, which is an essential aspect of experimental understanding,” explain David J. Snowden and Mary E. Boone.

Probing and sensing the scenario to determine a course of action can be assisted by a data analyst, who helps leaders collaboratively understand the historical and current information at hand in order to guide the next step.

An organization should take little interest in evaluating — and even less in justifying — past decisions. The totality of its interest should rest with how its data can inform its understanding of what is likely to happen in the future.

Of course, there is always the threat of oversimplifying issues, treating scenarios like they have easy answers.

But even with situations in the other domains of the Cynefin Framework, data visualization can provide insight into next steps—if they meet certain criteria.

What makes an effective data visualization

The presenter of the visualization must also provide a guiding force to assist the executive in reaching a final decision, but not actually formulate the decision for the executive.

With data visualization, there will always be insightful examples and examples that clearly missed the mark.

Avinash Kaushik, in his Occam's Razor article, "Closing Data's Last-Mile Gap: Visualizing For Impact!", described the ability of data visualizations to influence the Executive's decision-making process as closing the "last-mile" gap.

It can take an incredible effort to gather, sort, analyze and glean insights and trends from your data. If your analysis is solid, if your insights and trends are enlightening, you don’t want to muddle your audience with a confusing data visualization.

Remember: a data visualization is only as impactful as its design is on your audience.

In terms of the value in data visualization, it must provide simplicity, clarity, intuitiveness, insightfulness, gap, pattern and trending capability in a collaboration enabling manner, supporting the requirements and decision objectives of the executive.

Alberto Cairo’s Five Qualities of Great Data Visualizations

Alberto Cairo, author of “The Truthful Art: Data, Charts, and Maps for Communication,” outlines five qualities of great data visualizations. Your data visualization should be:

  1. Truthful: It should be based on thorough and objective research—just as a journalist is expected to represent the truth to the best of their abilities, so too is the data analyst.
  2. Functional: It should be accurate and allow your audience to act upon your information. For instance, they can perceive the incremental gains of your experimentation program over time in a sloping trendline.
  3. Beautiful: It needs to be well-designed. It needs to draw in your audience’s attention through an aesthetically pleasing display of information.
  4. Insightful: It needs to provide evidence that would be difficult to see otherwise. Trends, insights, and inferences must be drawn by the audience, in collaboration with the data analyst.
  5. Enlightening: It needs to illuminate your evidence. It needs to enlighten your audience with your information in a way that is easy to understand.

When you nail down all five of these criteria, your data visualization can shift your audience’s ways of thinking.

It can lead to those moments of clarity on what action to take next.

So, how are these design decisions made in data visualization?

Here’s an example.


How we make decisions about data visualization: An example in process

A note on framing: While the chart and data discussed below are real, the framing is artificial to protect confidentiality. The premise of this analysis is that we can generate better experiment ideas and prioritize future experiments by effectively communicating the insights available in the data.

Lead generation forms.

You probably come across these all the time in your web searches. Some forms have multiple fields and others have few—maybe enough for your name and email.

Suppose you manage thousands of sites, each with their own lead generation form—some long and some short. And you want to determine how many fields you should require from your prospects.

If you require too many form fields, you’ll lose conversions; too few, and you’ll lose information to qualify those prospects.

It’s a tricky situation to balance.

Like all fun data challenges, it’s best to pare the problem down into smaller, manageable questions. In this case, the first question you should explore is the relationship between the number of required fields and the conversion rate. The question is:

How do conversion rates change when we vary the number of required fields?

Unlike lead quality—which can be harder to measure and is appraised much further down the funnel—analyzing the relationship between the number of required fields and the number of submissions is relatively straightforward with the right data in hand. (Cajoling the analytics suite to provide that data can be an interesting exercise in itself—some will not do so willingly.)

So, you query your analytics suite, and (assuming all goes well), you get back this summary table:

On this table, how immediately do you register the differences between the average conversion rates? Note how you process the information—it’s System 2 thinking.
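
For context, a summary table like this is usually just the product of a simple aggregation. Here is a sketch with pandas; the site-level data is hypothetical and only stands in for whatever your analytics suite returns.

```python
import pandas as pd

# Hypothetical site-level data: one row per site.
sites = pd.DataFrame({
    "required_fields": [1, 2, 2, 3, 4, 4, 5, 6],
    "conversion_rate": [0.061, 0.055, 0.052, 0.048, 0.041, 0.043, 0.045, 0.047],
})

# Average conversion rate and share of sites per number of required fields.
summary = (
    sites.groupby("required_fields")
    .agg(avg_conversion_rate=("conversion_rate", "mean"),
         site_count=("conversion_rate", "size"))
)
summary["percent_of_sites"] = 100 * summary["site_count"] / summary["site_count"].sum()
print(summary)
```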

What’s the most effective way to convey the message in this data?

Most of you probably glossed over the table, and truth be told, I don't blame you—it's borderline rude to expect anyone to try to make sense of this many variables and numbers.

However, if you spend half a minute or so analyzing the table, you will make sense of what’s going on.

In this table format, you are processing the information using System 2 thinking—the cognitive way of understanding the data at hand.

On the other hand, note how immediate your understanding is with a simple data visualization…

The bar graph

Compared to the table above, the decrease in conversion rate between one and four required fields is immediately obvious, as is the upward trend after four. Your quick processing of these differences is System 1 thinking.

In terms of grasping the relationship in the data, it was pretty effective for a rough-and-ready chart.

In less than a second, you were able to see that conversion rates go down as you increase the number of required fields—but only until you hit four required fields. At this point, average conversion rates (intriguingly!) start to increase.
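
A rough-and-ready bar chart like this takes only a few lines. Here is a sketch with matplotlib, using hypothetical numbers in place of the real data.

```python
import matplotlib.pyplot as plt

# Hypothetical averages per number of required fields.
required_fields = [1, 2, 3, 4, 5, 6, 7, 8, 9]
avg_conversion = [0.060, 0.052, 0.047, 0.040, 0.043, 0.045, 0.046, 0.048, 0.050]

plt.bar(required_fields, avg_conversion)
plt.xlabel("Number of required fields")
plt.ylabel("Average conversion rate")
plt.title("Average conversion rate by required fields")
plt.show()
```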

But you can do better…

For a good data visualization, you want to gracefully straddle the line between complexity and understanding:

How can we add layers of information and aesthetics that enrich the data visualization, without compromising understanding?

No matter how clever the choice of the information, and no matter how technologically impressive the encoding, a visualization fails if the decoding fails.

Adding layers of information can’t be at the expense of your message—rather, it has to be in service of that message and your audience. So, when you add anything to the chart above, the key question to keep in mind is:

Will this support or undermine making informed business decisions?

In this case, you can have some fun by going through a few iterations of the chart, to see if any visualization works better than the bar chart.

The dot plot

Compared to a bar chart, a dot plot encodes the same information, while using fewer pixels (which lowers visual load), and unshackles you from a y-axis starting at zero (which is sometimes controversial, according to this Junk Charts article and this Stephanie Evergreen article).

In the context of digital experimentation, not starting the y-axis at zero generally makes sense because even small differences between conversion rates often translate into significant business impact (depending on number of visitors, the monetary / lifetime value of each conversion, etc.).

In other words, you should design your visualization to make apparent small differences in conversion rates because these differences are meaningful—in this sense, you’re using the visualization like researchers use a microscope.

If you are still not convinced, an even better idea (especially for an internal presentation) would be to map conversion rate differences to revenue—in that case, these small differences would be amplified by your site’s traffic and conversion goal’s monetary value, which would make trends easier to spot even if you start at 0.

Either way, as long as the dots are distant enough, large enough to stand out but small enough to not overlap along any axis, reading the chart isn’t significantly affected.

Compared to the bar chart, the dot plot lowers the visual load and gives us flexibility with our y-axis (it does not start at 0), allowing us to emphasize the trend.
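
Here is a matplotlib sketch of the dot plot, again with hypothetical numbers; note that the y-axis is deliberately zoomed in rather than forced to start at zero.

```python
import matplotlib.pyplot as plt

required_fields = [1, 2, 3, 4, 5, 6, 7, 8, 9]
avg_conversion = [0.060, 0.052, 0.047, 0.040, 0.043, 0.045, 0.046, 0.048, 0.050]

# Dots instead of bars: less ink, and the axis can zoom in on the range of the data.
plt.plot(required_fields, avg_conversion, "o")
plt.ylim(0.035, 0.065)  # zoom in so small differences stay visible
plt.xlabel("Number of required fields")
plt.ylabel("Average conversion rate")
plt.show()
```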

More importantly (spoiler alert!), our newly-found real estate (after changing from bars to dots) allows you to add layers of information without cluttering the data visualization.

One such layer is the data’s density (or distribution), represented by a density plot.

A density plot

A density plot uses the height of the curve to show roughly how many data points (what percentage of sites) require how many fields. In this case, the density plot adds the third column (“Percent of Sites”) from the table you saw earlier.

That makes it easy to see (once you understand how density plots work) how much stock to place in those averages.

For example, an average that is calculated on a small number of sites (say, less than 1% of the available data) is not as important or informative as an average that represents a greater number of sites.

So, if an average was calculated based on a mere ten sites, we would be more wary of drawing any inferences pertaining to that average.

After adding the density plot, you can see that most sites require two fields, roughly the same require one and three, and after eight required fields, the distribution is pretty much flat—meaning that we don’t have many data points. So, those incredibly high conversion rates (relative to the rest) are fairly noisy and unrepresentative—something we’ll verify with confidence intervals later on.
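
One simple way to approximate this layer is an area overlay on a secondary axis showing what share of sites sits behind each average. A sketch follows, with hypothetical numbers and a shaded area standing in for a true density curve.

```python
import matplotlib.pyplot as plt

required_fields = [1, 2, 3, 4, 5, 6, 7, 8, 9]
avg_conversion = [0.060, 0.052, 0.047, 0.040, 0.043, 0.045, 0.046, 0.048, 0.050]
percent_of_sites = [22, 28, 18, 12, 8, 5, 3, 2, 2]  # hypothetical distribution

fig, ax = plt.subplots()
ax.plot(required_fields, avg_conversion, "o")
ax.set_xlabel("Number of required fields")
ax.set_ylabel("Average conversion rate")

# Secondary axis: how much data sits behind each average.
ax2 = ax.twinx()
ax2.fill_between(required_fields, percent_of_sites, alpha=0.2)
ax2.set_ylabel("Percent of sites")
plt.show()
```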

Visualizing uncertainty and confidence intervals

When we add the density plot, we see that most of our data comes from sites that require between one and four fields (80%, if you added the percentages in the table), the next big chunk (19%) come from sites that require five to nine fields, and the remaining 1% (represented by the flat sections of the density curve) require more than nine. (The 80/20 rule strikes again!)

Another useful layer of information is the confidence interval for these averages. Given the underlying data (and how few data points go into some averages), how can we represent our confidence (or uncertainty) surrounding each average?

Explaining Confidence Intervals

If you’ve never encountered confidence intervals before, here’s a quick example to explain the intuition behind them…

Let’s say you’re taking a friend camping for three days, and you want to give them enough information so they can pack appropriately.

You check the forecast and see lows of 70°F, highs of 73°F, and an average of 72°F.

So, when you tell your friend “it’s going to be about 72°F“—you’re fairly confident that you’ve given them enough information to enjoy the trip (in terms of packing and preparing for the weather, of course).

On the other hand, suppose you’re camping in a desert that’s expecting lows of 43 °F, highs of 100°F, and (uh oh) an average of 72°F.

Assuming you want this person to travel with you again, you probably wouldn’t say, “it’s going to be about 72°F.” The information you provided would not support them in making an informed decision about what to bring.

That’s the idea behind confidence intervals: they represent uncertainty surrounding the average, given the range of the data, thereby supporting better decisions.

Visually, confidence intervals are represented as lines (error bars) that extend from the point estimate to the upper and lower bounds of our estimate: the longer the lines, the wider our interval, the more variability around the average.

When the data are spread out, confidence intervals are wider, and our point estimate is less representative of the individual points.

Conversely, when the data are closer together, confidence intervals are narrower, and the point estimate is more representative of the individual points.

Once you add error bars, you can see that many of those enticingly high conversion rates are muddled by uncertainty: at twelve required fields, the conversion rate ranges from less than 10% to more than 17%! Though less extreme, a similar concern holds for data points at ten and eleven required fields. What’s happening at thirteen, though?
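
Here is a sketch of how those error bars might be computed and drawn, using a normal-approximation 95% interval; the spread and sample-size figures are hypothetical.

```python
import numpy as np
import matplotlib.pyplot as plt

required_fields = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
avg_conversion = np.array([0.060, 0.052, 0.047, 0.040, 0.043, 0.045, 0.046, 0.048, 0.050])
std_dev = np.array([0.010, 0.009, 0.011, 0.012, 0.015, 0.018, 0.020, 0.025, 0.030])
n_sites = np.array([900, 1100, 700, 500, 300, 180, 90, 40, 15])

# Approximate 95% confidence interval for each average: 1.96 * standard error.
ci = 1.96 * std_dev / np.sqrt(n_sites)

plt.errorbar(required_fields, avg_conversion, yerr=ci, fmt="o", capsize=3)
plt.xlabel("Number of required fields")
plt.ylabel("Average conversion rate")
plt.show()
```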

At this point, there are two things to note: first, when you look at this chart, your attention will most likely be drawn to the points with the widest confidence intervals.

That is, the noisiest estimates (the ones with fewer data points and / or more variability) take up the most real estate and command the most attention.

Obviously, this is not ideal—you want to draw attention to the more robust and informative estimates: those with lots of data and narrower intervals.

Second, the absence of a confidence interval around thirteen required fields means that either there’s only one data point (which is likely the case, given the density curve we saw earlier), or all the points have the same average conversion rate (not very likely).

Luckily, both issues have the same solution: cut them out.

How to best handle outliers is a lively topic—especially since removing outliers can be abused to contort the data to fit our desired outcomes. In this case, however, there are several good reasons to do so.

The first reason has already been mentioned: these outliers come from less than 1% of our entire data set, so despite removing them, we are still representing 99% of our data.

Second, they are not very reliable or representative, as evidenced by the density curve and the error bars.

Finally, and more importantly—we are not distorting the pattern in the data: we’re still showing the unexpected increase in the average conversion rate beyond four required fields.

We are doing so, however, using the more reliable data points, without giving undue attention to the lower quality ones.

Lastly, to visualize and quantify our answer to the question that sparked the whole analysis (how do conversion rates change when we vary the number of required fields?), we can add two simple linear regressions: the first going from one to four required fields, the second from four to nine required fields.

Why two, instead of the usual one?

Because we saw from the density chart discussion that 80% of our data comes from sites requiring one to four fields, a subset that shows a strong downward trend.

Given the strength of that trend, and that it spans the bulk of our data, it’s worth quantifying and understanding, rather than diluting it with the upward trend from the other 20%.

That remaining 20%, then, warrants a deeper analysis: what’s going on there—why are conversion rates increasing?

The answer to that will not be covered in this article, but here’s something to consider: could there be qualitative differences between sites, beyond four required fields? Either way, the regression lines make the trends in the data clearer to spot.

The regression lines draw attention to the core trend in the data, while also yielding a rough estimate of how much conversion rates decrease with the increase in required fields.
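
And a sketch of fitting and drawing the two regression lines with numpy's polyfit, using the same hypothetical numbers as above.

```python
import numpy as np
import matplotlib.pyplot as plt

required_fields = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9])
avg_conversion = np.array([0.060, 0.052, 0.047, 0.040, 0.043, 0.045, 0.046, 0.048, 0.050])

# Fit one line for 1-4 required fields and another for 4-9.
left = required_fields <= 4
right = required_fields >= 4
slope_l, intercept_l = np.polyfit(required_fields[left], avg_conversion[left], 1)
slope_r, intercept_r = np.polyfit(required_fields[right], avg_conversion[right], 1)

plt.plot(required_fields, avg_conversion, "o")
plt.plot(required_fields[left], slope_l * required_fields[left] + intercept_l)
plt.plot(required_fields[right], slope_r * required_fields[right] + intercept_r)
plt.xlabel("Number of required fields")
plt.ylabel("Average conversion rate")
plt.show()

print(f"Estimated change per additional required field (1 to 4): {slope_l:.2%}")
```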

After adding the regression line, you summarize the main take-away with a nice, succinct subtitle:

Increasing the number of Required Fields from one to four decreases average conversion rate by 1.2% per additional field, for 80% of sites.

This caption helps orient anyone looking at the chart for the first time—especially since we’ve added several elements to provide more context.

Note how the one statement spans the three main layers of information we’ve visualized:

  1. The average conversion rate (as point estimates)
  2. The distribution of the data (the density curve)
  3. The observed trend

Thus, we’ve taken a solid first pass at answering the question:

How do conversion rates change when we vary the number of required fields?

Does this mean that all sites in that 80% will lose ~1% conversion rate for every required field after the first?

Of course not.

As mentioned in the opening section, this is the simplest question that’ll provide some insight into the problem at hand. The lowest-hanging fruit, if you will.

However, it is far from a complete answer.

You've gently bumped into the natural limitation of a bivariate analysis (an analysis with only two variables): you're only looking at the change in conversion rate through the lens of the number of required fields, when there are obviously more variables at play (the type of site, the client base, etc.).

Before making any business decisions, you would need a deeper dive into those other variables and, ideally, the incorporation of lead quality metrics, to better understand how the number of required fields impacts total revenue.

And this is where you come back full circle to experimentation: you can use this initial incursion to start formulating and prioritizing better experiment ideas.

For example, a successful experimentation strategy in this context would have to, first, better understand the two groups of sites discussed earlier: those in the 80% and those in the other 20%.

Additionally, more specific tests (i.e., those targeting sub-domains) would have to consider whether a site belongs to the first group (where conversion rates decrease as the number of required fields increase) or the second group (where the inverse happens)—and why.

Then, we can look at which variables might explain this difference, and what values these variables take for that site.

For example, are sites in the first group B2C or B2B? Do they sell more or less expensive goods? Do they serve different or overlapping geographic regions?

In short, you’ve used data visualization to illuminate a crucial relationship to stakeholders, and to identify knowledge gaps when considering customer behaviour across a range of sites.

Addressing these gaps would yield even more valuable insights in the iterative process of data analysis.

And these insights, in turn, can guide the experimentation process and improve business outcomes.

Your audience needs to trust your data visualization—and you.

When your experimentation team and Executives can get into the boardroom together, it’s disruptive to your business. It shakes your organization from the status quo, because it introduces new ways of making decisions.

Data-driven decisions are proven to be more effective.

In fact, The Sloan School of Business surveyed 179 large publicly traded firms and found that those that used data to inform their decisions increased productivity and output by 5-6%.

And data analysts have the power to make decision-making among Executive teams more informed.

Rather than relying solely on the Executive's ability to reason through the five domains of the Cynefin Framework, data visualization presents the true power of experimentation, and the ability of experimentation to solve real business problems.

But like any working dynamic, you need to foster trust—especially when you are communicating the insights and trends of data. You need to appear objective and informed.

You need to guide your audience through the avenues of action that are made clear by your analysis.

Of course, you can do this through speech. But you can also do this through the design of your data visualizations.

Data visualizations help your Executive team keep a pulse on what is happening in your experimentation program and allow them to understand how it can impact internal decision making.

Whether you are presenting them in a dashboard where your team can keep a pulse on what’s happening with your experimentation program, or if it’s a simple bar graph or dot plot in your slide deck, your data visualizations matter.

Clear labeling and captions, graphic elements that showcase your data dimensions, lowering visual load, and even using color to distinguish elements in your data visualization—these help your audience see what possibilities exist.

They help your audience identify patterns and associations—and even ask questions that can be further validated through experimentation.

Because experimentation takes the guesswork out of decision making. Your data visualizations make it easier for the Executive to navigate the complexity of the situations they are challenged with today.

And that is, ultimately, the most persuasive way to evangelize experimentation at your organization.

How impactful have you found strong data visualizations on your team’s decision-making process? We’d love to hear about it.

Author

Wilfredo Contreras

Senior Data Analyst

Contributors

Lindsay Kwan

Marketing Communications Specialist



Confirm the integrity of your data

Never before has there been a greater need for a reliable, holistic marketing measurement tool. In a world of fractured media and consumer interest, intense competitive pressure, and lightning-speed product innovation, the sheer volume of data that must be analyzed and the decisions that must be made demand a more evolved approach to attribution and decision making. This need for speed has brought into bright focus a mandate for reliable, consistent and valid data, and the potential for challenges when there are errors.

The attribution category has been evolving quickly over the past decade, and there are myriad options from which marketers can choose. Recent research conducted by Forrester suggests that leading marketers are adopting the newest and most advanced approach: Unified Measurement or Total Marketing Measurement models. This analysis combines the attributes of person-level measurement with the ability to measure traditional channels such as TV. Marketers who upgrade to and invest in novel solutions – financially and organizationally – can find a competitive advantage from smarter attribution.

The greatest of these instruments answer questions such as the optimal frequency and reach within and between channels, and determine which messages and creative are best for which audiences. New advances in these products are providing even more granular insights concerning message sequencing and next-best-message decisioning based on specific audiences and the multiple stages of their buying processes. The best of these solutions incorporate external and environmental circumstances such as weather, travel patterns and more. Furthermore, today's solutions produce insights in such a timely fashion that agile marketers can incorporate them into active campaigns to drive massive performance gains, rather than waiting weeks or months to see returns.

However, while these attribution models have come a long way in recent years, there is one challenge that all must tackle: the need for reliable, consistent and valid data. Even the most advanced and powerful of these systems are dependent on the quality of the information they ingest. Incorrect or sub-par input will always produce the wrong outputs. Data quality and reliability have become a primary focus of marketing teams and the forward-thinking CMOs who lead them.

If the data are not accurate, it doesn't matter what statistical methods or algorithms we apply, nor how much experience we have in interpreting data. If we start with imperfect data, we'll end up with erroneous results. Basing decisions on conclusions derived from flawed data can have costly consequences for marketers and their companies. Inaccurate data may inflate or give undue credit to a specific tactic. For example, based on a media buy recorded in the log, a model may indicate that a television advertisement (usually one of the most expensive of our marketing efforts) was responsible for driving an increase in visitors to our website. But if that ad failed to air and the media log is inaccurate, the team may wrongly reallocate budget toward their television buy. This would be a costly mistake.

In fact, inaccurate data may be one of the leading causes of waste in advertising. These inaccuracies have become an epidemic that negatively impacts both advertisers and the consumers they are trying to reach. Google recently found that, due in large part to bad data, more than 56 percent of ad impressions never actually reach consumers, and Proxima estimates that $37 billion of worldwide marketing budgets go to waste on poor digital performance. And that's just digital. The loss for major players who market both online and offline can be extensive, and it calls for a revolutionary new approach to data quality and reliability.

So, how accurate is your data? Do you know if there are gaps? Are there inconsistencies that may skew your results? Many of us place so much trust in our data systems that we forget to ask these critical questions. You can't just assume you have accurate data; now more than ever, you must know you do. That may require some work up front, but the time you invest in ensuring accurate data will pay off in better decisions and other significant improvements. Putting steps and checks in place early in the process to ensure timely and accurate reporting is key to avoiding costly mistakes down the road. Solving these problems early in your attribution efforts builds confidence in the optimization decisions you're making to drive higher return on investment and, perhaps more importantly, helps teams avoid costly missteps.

When it comes to attribution, it is especially critical to make sure the system you are relying on has a process for analyzing and ensuring that the data coming in is accurate.

Below are four key considerations you can use, when working with your internal analytics staff, agencies, marketing team and attribution vendor, to improve data input and validation and ensure accurate conclusions.

1. Develop a data delivery timetable

The entire team should have a clear understanding of when data will be available and, more importantly, by what date and/or time every data set will arrive. Missing or unreported data may be the single most significant threat to drawing accurate conclusions. Like an assembly line, if data fails to show up on time, it stops production for the entire factory. Fortunately, this may also be one of the easiest challenges to overcome. Step one is to conduct an audit of all the information you are currently using to make decisions. Map the agreed-upon or expected delivery date for every source. If you receive a weekly feed of website visitors, on what day does it typically arrive? If your media agency sends a monthly reconciliation of ad spend and impressions, what is the deadline for its delivery?

Share these sources of information and the schedule of delivery with your attribution vendor. The vendor, in turn, should develop a dashboard and tiered system of response for data flow and reporting. For example, if data is flowing as expected, the dashboard may include a green light to indicate all is well. If the information is a little late, even just past the scheduled date but within a predefined window of time, the system should generate a reminder to the data provider or member of the team who is responsible for the data letting them know that there may be a problem. However, if data is still missing past a certain point, threatening the system’s ability to generate optimizations, for example, an alert should be sent to let the team know that action is needed.
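To make that tiered response concrete, here is a minimal Python sketch. The feed names, expected arrival times and grace windows are assumptions for illustration, not part of any particular vendor's product.

```python
from datetime import datetime, timedelta

# Hypothetical feed schedule: expected arrival time and a grace window before escalation.
FEEDS = {
    "website_visitors_weekly": {"expected": datetime(2019, 5, 6, 9, 0), "grace": timedelta(hours=12)},
    "agency_spend_monthly": {"expected": datetime(2019, 5, 31, 17, 0), "grace": timedelta(days=1)},
}

def feed_status(name, last_received, now):
    """Return 'green', 'reminder' or 'alert' for a feed, mirroring the tiered response above."""
    expected = FEEDS[name]["expected"]
    grace = FEEDS[name]["grace"]
    if last_received is not None and last_received <= expected:
        return "green"      # data arrived on or before the agreed date: all is well
    if now <= expected + grace:
        return "reminder"   # a little late: nudge the data owner responsible for the feed
    return "alert"          # past the grace window: escalate so the team can act

# Example: the weekly visitors feed still hasn't arrived a day after it was due.
print(feed_status("website_visitors_weekly", None, datetime(2019, 5, 7, 9, 0)))  # -> "alert"
```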

2. Create standard templates for routinely reported data

You, members of your team, and your attribution partner need a clear understanding of what specific data is included in which report and in what formats. It would be a shame to go through the hard work of making sure your information is arriving on time only to find out that the data is incomplete or reported inconsistently. To use the assembly line analogy again, what good is it to make sure a part arrives on time if it’s the wrong part that’s delivered?

Like quality control or a modern-day retinal scan, the system should check to see if the report matches expected parameters. Do the record counts match the number of records you expected to receive? If data from May was expected, do the dates make sense? And, is all the information that should be in the report included? Are there missing data?

With this system in place, a well-configured attribution solution or analytics tool should be able to test incoming data for both its completeness and its compliance with expected norms. If there are significant gaps in the data, or if the data deviates too far from an acceptable standard, the system can again automatically alert the team that there may be a problem.
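As a rough illustration of those template checks, here is a sketch that assumes a CSV feed; the column names, the ISO date format and the 5% row-count tolerance are assumptions, not a vendor specification.

```python
import csv
from datetime import date

# Hypothetical columns every report is expected to contain.
REQUIRED_COLUMNS = {"report_date", "channel", "impressions", "spend"}

def validate_report(path, expected_rows, expected_year_month):
    """Check an incoming CSV feed against expected parameters; return a list of problems found."""
    problems = []
    with open(path, newline="") as f:
        reader = csv.DictReader(f)
        if not REQUIRED_COLUMNS.issubset(set(reader.fieldnames or [])):
            return ["report is missing expected columns"]
        rows = list(reader)

    # Do the record counts match the number of records you expected to receive?
    if abs(len(rows) - expected_rows) > 0.05 * expected_rows:
        problems.append(f"row count {len(rows)} deviates from the expected {expected_rows}")

    # If data from May was expected, do the dates make sense?
    for row in rows:
        d = date.fromisoformat(row["report_date"])
        if (d.year, d.month) != expected_year_month:
            problems.append(f"unexpected date {d} in report")
            break

    # Are there missing data?
    if any(not row["impressions"] or not row["spend"] for row in rows):
        problems.append("blank impressions or spend values")

    return problems  # an empty list means the feed matches the expected template

# Example usage (hypothetical file): a May 2019 feed expected to hold about 3,100 rows.
# print(validate_report("media_log_may.csv", expected_rows=3100, expected_year_month=(2019, 5)))
```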

3. Use previous data from the source to confirm new data

Your attribution provider should be able to use data previously reported from a source to help identify errors or gaps in the system. For example, you can include several weeks or months of previously reported data in each feed: a feed covering the past four weeks will contain one new set of data and three previously reported, overlapping sets. If the overlapping data does not match what was loaded before, that mismatch should trigger an alert.
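A minimal sketch of that overlap check follows, assuming each feed restates prior weeks' spend keyed by week and channel; the key structure and the 1% tolerance are illustrative assumptions.

```python
def overlap_mismatches(previous, incoming, tolerance=0.01):
    """previous and incoming map (week, channel) -> spend; return keys whose restated values disagree."""
    mismatches = []
    for key in previous.keys() & incoming.keys():   # only the overlapping periods are compared
        old, new = previous[key], incoming[key]
        if old == new:
            continue
        if abs(new - old) > tolerance * max(abs(old), abs(new)):
            mismatches.append(key)
    return mismatches  # any entries here should trigger an alert for investigation

previous = {("2019-W17", "tv"): 120000, ("2019-W18", "tv"): 95000}
incoming = {("2019-W17", "tv"): 120000, ("2019-W18", "tv"): 101000, ("2019-W19", "tv"): 98000}
print(overlap_mismatches(previous, incoming))  # -> [('2019-W18', 'tv')]
```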

Next, determine whether the data makes sense: is the new data rational and consistent with what was previously reported? This check is a crucial step in using previously reported data to confirm the logic of the most recent feed.

Here, too, you can check for trends over time to see whether the data is consistent or whether there are outliers. Depending on the specific types of media or performance being measured, a set of logic tests should be developed. For example, is the price of media purchased within the range of what is typically paid? Are the reach and frequency per dollar of the media what was expected?
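For instance, a simple logic test on media price might look like the sketch below; the channel names and CPM ranges are made-up illustrations, not benchmarks, and in practice the ranges would come from your own buy history.

```python
# Illustrative "typical" CPM ranges per channel.
TYPICAL_CPM = {"tv": (8.0, 35.0), "display": (1.0, 12.0)}

def cpm_outliers(buys):
    """buys: list of dicts with 'channel', 'spend' and 'impressions'. Return buys priced outside the typical range."""
    flagged = []
    for buy in buys:
        cpm = buy["spend"] / (buy["impressions"] / 1000)   # effective cost per thousand impressions
        low, high = TYPICAL_CPM[buy["channel"]]
        if not low <= cpm <= high:
            flagged.append({**buy, "cpm": round(cpm, 2)})
    return flagged

buys = [
    {"channel": "tv", "spend": 250000, "impressions": 9000000},      # CPM ~ 27.8, within range
    {"channel": "display", "spend": 40000, "impressions": 1500000},  # CPM ~ 26.7, flagged as an outlier
]
print(cpm_outliers(buys))
```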

Leading providers of marketing attribution solutions continually perform these checks to ensure data accuracy and consistent decision making. With these checks in place, the marketing attribution partner can diagnose any problems, and the team can act together to fix them. This technique has the added benefit of continuously updating information so that errors, or suspicious data, don't linger to confound the ultimate conclusions.

One note here that should be taken into account: outliers are not necessarily bad data. Consider outliers as pieces of information that have not yet been confirmed or refuted. It is a best practice to investigate outliers to understand their source, or to hold them in your system to see whether they are the beginning of a new trend.

4. Get confirmation from multiple sources

Finally, there are tangible benefits to confirming data across multiple data sets. For example, does the information about a customer contained in your CRM match the information you may be getting from a source like Experian? Does the data you're receiving about media buys and air dates match the information you may be receiving from Sigma-encoded monitoring?
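Here is a hedged sketch of that kind of cross-source confirmation: compare the fields two systems should agree on and surface any conflicts. The field names and records are hypothetical.

```python
def cross_source_conflicts(crm_record, external_record, fields=("postal_code", "household_size")):
    """Compare fields two sources should agree on; return {field: (crm_value, external_value)} for conflicts."""
    conflicts = {}
    for field in fields:
        a, b = crm_record.get(field), external_record.get(field)
        if a is not None and b is not None and a != b:
            conflicts[field] = (a, b)
    return conflicts  # a non-empty result means the sources disagree and the record needs review

crm = {"customer_id": "C-104", "postal_code": "94105", "household_size": 3}
external = {"customer_id": "C-104", "postal_code": "94107", "household_size": 3}
print(cross_source_conflicts(crm, external))  # -> {'postal_code': ('94105', '94107')}
```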

Even companies that are analytics early adopters find themselves challenged to ensure the data upon which they rely is consistent, reliable and accurate. Marketers understand that they have to be gurus of data-driven decision making, but they can’t just blindly accept the data they are given.

Remember, as we have mentioned, despite the potential benefits of a modern attribution solution, erroneous data ensures its undoing. To be certain your process is working precisely, create a clear understanding of the data and work with a partner who can build an early warning system for any issues that arise. Ultimately, this upfront work ensures more accurate analysis and will help achieve the goal of improving your company's marketing ROI.

As a very first step, since data may come from multiple departments inside the company and the various agencies that support the team, develop a cross-functional steering committee with representatives from analytics, marketing and finance, as well as your digital and traditional media agencies; one member of the committee should be responsible for overall data quality and flow. As a team, work together to set benchmarks for quality and meet regularly to discuss areas for improvement.

In this atmosphere of fragmented media and consumer (in)attentiveness, those who rely on data-driven decision making will gain a real competitive advantage in the marketplace. The capabilities of today's solutions produce insights in such a timely fashion that the nimblest marketers can incorporate them into active campaigns to drive massive performance improvements, rather than waiting weeks or months to see results. But the Achilles heel of any measurement system is the data on which it relies to generate insight. All other things being equal, the better the data going in, the better the optimization recommendations coming out.


Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.


About The Author

Rex Briggs is Founder and CEO of Marketing Evolution and has more than 20 years of experience in research and analytics. Rex focuses on omni-channel personal level marketing attribution and optimization. He served on the review board of JAR, and serves on Research World’s editorial board. He is the best-selling author of two books, “What Sticks, Why Advertising Fails and How to Guarantee Yours Succeeds” (2006) and “SIRFs Up, The Story of How Algorithms And Software Are Changing Marketing.”

Source link

4 simple ways small businesses can use data to build better customer relationships

4 simple ways small businesses can use data to build better customer relationships

In a world where customers are bombarded across every possible channel with brand messages, targeting is more important than ever before. Small businesses need to be able to make their campaigns feel relevant and personal in order to keep up, but the processes involved – collecting, organizing and interpreting customer data to make it actionable – are often intimidating to small businesses and solo entrepreneurs with limited time and resources.

Collecting, organizing and learning from your customer data is critical no matter how large your team is or what stage of growth you’re in. In fact, there’s no better time to consider your processes for data than when you’re just starting out. And getting started with basic strategies for building customer relationships doesn’t have to be difficult – there are some simple steps you can take to save yourself a lot of time as your business grows and scales.

From the moment you start your business and establish an online presence, you should be laying the groundwork for effective CRM strategies. This includes establishing a single source of truth for your customer data, being thoughtful and organized about how you collect information, and setting up the right processes to interpret that data and put it to work for your marketing. Here are some actionable steps (with examples) to take now:

  • Collect: Make sure you’re set up to onboard people who want to be marketed to. Whether you’re interacting online or in person, you should be collecting as many insights as possible (for example, adding a pop-up form to your website to capture visitors, or asking people about their specific interests when they sign up for your email list in store) and consolidating them so you can use them to market.
  • Organize: Once you have this data, make sure you’re organizing it in a way that will give you a complete picture of your customer, and make it easy to access the insights that are most important for your business to know. Creating a system where you can easily sort your contacts based on shared traits – such as geography, purchase behaviors or engagement levels – will make it much easier to target the right people with the right message.
  • Find insights: Find patterns in data that can spark new ideas for your marketing. For example, the realization that your most actively engaged customers are in the Pacific Northwest could lead to a themed campaign targeting this audience, a plan for a pop-up shop in that location or even just help you plan your email sends based on that time zone.
  • Take action: Turn insights into action, and automate to save time. As you learn more about your audience and what works for engaging them, make sure you're making these insights scalable by setting up automations to trigger personalized messages based on different demographic or behavioral data (a minimal sketch of these last two steps follows this list).
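As a rough illustration of the organize and take-action steps, here is a minimal Python sketch; the contact records, the traits and the print-based "trigger" are stand-ins for whatever email platform and data you actually use.

```python
from collections import defaultdict

# Hypothetical contact records pulled from a single source of truth.
contacts = [
    {"email": "ana@example.com", "region": "Pacific Northwest", "engaged": True},
    {"email": "ben@example.com", "region": "Southeast", "engaged": False},
    {"email": "cam@example.com", "region": "Pacific Northwest", "engaged": True},
]

def segment_by(contacts, trait):
    """Group contacts by a shared trait (geography, purchase behavior, engagement level, ...)."""
    segments = defaultdict(list)
    for contact in contacts:
        segments[contact[trait]].append(contact)
    return segments

# "Take action": trigger a themed campaign for the most engaged region identified in your data.
for contact in segment_by(contacts, "region")["Pacific Northwest"]:
    if contact["engaged"]:
        print(f"queue themed campaign email to {contact['email']}")
```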

Doing this right won’t just result in more personalized marketing campaigns and stronger, more loyal customer relationships – it will also help you be smart about where you focus your budget and resources as you continue to grow.


Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.


About The Author

As VP of Marketing, Darcy Kurtz leads Mailchimp’s product marketing team. Her team aligns product strategy with marketing execution to make Mailchimp’s sophisticated marketing technology accessible for small businesses worldwide. Darcy joined Mailchimp with more than 25 years of experience leading global marketing at companies like Dell, Sage and Outsystems. She has a career-long passion for serving small businesses.

Source link