The SEO metrics that really matter for your business (Search Engine Watch)

Whether you are a business owner, a marketing manager, or simply interested in the world of ecommerce, you may be familiar with how a business can approach SEO.

Perceptions of SEO and its success vary from person to person, ranging from a sophisticated technical grasp to a knowledge of the essentials.

At all levels, measurement and understanding of search data are crucial, and different metrics will stand out, from rankings to the finer details of goals and page speed.

As you may know, you can’t rely solely on ranks as a method to track your progress. But there are other, simple ways to measure the impact of SEO on a business.

In a recent AMA on Reddit, Google's own Gary Illyes urged SEO professionals to stick to the basics, and this way of thinking can be applied to the measurement of organic search performance.

In this article, we will look at the best metrics for understanding the impact of SEO on your business, and how they can be viewed from a technical and commercial perspective. Before we start, it’s worth mentioning that this article uses Google’s own demo Analytics account for screenshots. If you need help getting to grips with it, check out this article, or access the demo version of Google Analytics.

Each of the following is a commercial SEO metric: data that means something to everyone in a business.

Organic traffic

This is undoubtedly a simple, if not the simplest, way of understanding the return on any SEO efforts. The day-to-day traffic from search engines is the key measure for many marketers, and any increase can often be tied to an improved level of search visibility (excluding seasonal variation).

In a world where data drives decisions, these figures are pretty important and represent a key part of any internet user’s session, whether that is to get an answer, make a purchase or something else.

In Google Analytics, simply follow this path: Acquisition -> All Traffic -> Channels to see the organic traffic received within your chosen time period.

Identifying traffic sources in Google Analytics

You might be asking, “how can I know more?”

Google might have restricted access to keyword data back in 2011, but you can still dig down into your traffic from organic search to look at landing pages and locations.

Organic traffic data – Filtered by landing page 

Not all search traffic hits your homepage; some users head to your blog or to specific landing pages, depending on their needs. For some searches, however, such as those for your company name, your homepage will be the most likely option.

To understand the split of traffic across your site, use the “Landing Page” primary dimension and explore the new data, split by specific page URL.

Understanding the traffic split using Google Analytics
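As a rough sketch of the same idea, organic sessions exported from an analytics tool can be grouped by landing page in a few lines of Python (the session data below is hypothetical):

```python
from collections import Counter

# Hypothetical exported sessions as (landing_page, channel) pairs;
# in practice this would come from an analytics export.
sessions = [
    ("/", "organic"), ("/blog/seo-basics", "organic"),
    ("/pricing", "paid"), ("/blog/seo-basics", "organic"),
    ("/", "direct"), ("/pricing", "organic"),
]

# Keep only organic search sessions, then count them per landing page.
organic_pages = [page for page, channel in sessions if channel == "organic"]
by_landing_page = Counter(organic_pages)

print(by_landing_page.most_common())
# → [('/blog/seo-basics', 2), ('/', 1), ('/pricing', 1)]
```

The same grouping trick works for any primary dimension, such as city or device, by swapping which field you count.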

Organic traffic data – Filtered by location

Within the same section, the organic search data can be split by location, such as city, to give even further detail on the makeup of your search traffic. Depending on how your business operates, the locations shown may be within the same country or across international locations. If you have spent time optimizing for audiences in specific areas, this view will be key to monitor overall performance.

Screenshot of the city-wise breakdown of search traffic in Google Analytics

Revenue, conversions, and goals

In most cases, your website is likely to be set up to drive conversions, whether that is product sales, document downloads, or leads.

Part of understanding the success of SEO is its contribution to the goals of a business, whether those are monetary or lead-based.

For revenue-based data, head to the Conversions section within Google Analytics, then select Product Performance. Within that section, set the secondary dimension to Source/Medium to show just the sales that originate from search engine traffic.

Screenshot of the product performance list to track search originated sales

If your aim isn’t totally revenue based, perhaps a signup form or some downloadable content, then custom analytics goals are your way of fully understanding the actions of visitors that originate from search engines.

Within the Conversions section, your goal completions can be split by source, allowing you to focus solely on visits from organic search.

Graph on source wise split of goal conversions
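To illustrate why this split matters, the conversion rate for a single source can be computed from a visit log like so (the sources and visit data here are hypothetical):

```python
# Hypothetical visit log: (source/medium, converted?) pairs,
# e.g. from a goal-completion export.
visits = [
    ("google / organic", True), ("google / organic", False),
    ("google / organic", False), ("(direct) / (none)", False),
    ("facebook / referral", True),
]

def conversion_rate(visits, source):
    """Share of visits from `source` that completed the goal."""
    outcomes = [converted for s, converted in visits if s == source]
    return sum(outcomes) / len(outcomes)

print(f"{conversion_rate(visits, 'google / organic'):.1%}")  # → 33.3%
```

Comparing this figure across sources shows whether organic search visitors actually take the actions you care about.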

If a visitor finds your site from a search and then buys something or registers their details, it really suggests you are visible to the right audience.

However, if you are getting consistent organic search visits with no further actions taken, that suggests the key terms you rank for aren’t totally relevant to your website.

SEO efforts should focus on reaching relevant audiences: you might rank #1 for a search query like “cat food”, but if you only sell dog products, your optimization hasn’t quite worked.

Search and local visibility

If your business has a web and/or physical store presence, you can use the tools within Google My Business to look into, and beyond, the performance of the traditional blue links.
Specifically, you can understand the following:

  • How customers search for your business
  • How someone sees your business
  • What specific actions they take

The better your optimization, the more of these actions you will see.

Doughnut graph of search volume seen in Google My Business

Graph of customer actions

Graph of listing sources for Google My Business

Average search rankings

Rankings for your key terms on search engines have traditionally been an easy way to quickly get a view of overall performance. However, a “quick Google” can be hard to draw conclusions from. Personalized search from your history and location essentially skews average rank to a point where its use has been diminished.

A variety of tools can be used to get a handle on average rankings for specific terms. Google Search Console is the free way to do this, while freemium tools like SEMrush and Ahrefs also offer the ability to understand average rank distribution.

With search rankings becoming harder to accurately track, the measure of averages is the best way to understand how search ranking relates to and impacts the wider business.

Graph on average positioning of the website in search
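To show how an “average” position is typically derived, the snippet below weights each query’s position by its impressions, which is roughly how an impression-weighted average works (the rows here are hypothetical, not real Search Console data):

```python
# Hypothetical Search Console-style rows: (query, avg_position, impressions).
rows = [
    ("dog food", 3.0, 1200),
    ("dog toys", 8.5, 400),
    ("puppy treats", 1.2, 2500),
]

# Weight each position by impressions so high-volume queries dominate.
total_impressions = sum(imp for _, _, imp in rows)
avg_position = sum(pos * imp for _, pos, imp in rows) / total_impressions

print(round(avg_position, 2))  # → 2.44
```

An unweighted mean of the three positions would be 4.23, so the weighting choice materially changes the headline number you report.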

Technical metrics – Important but not everyone pays attention to these

When it comes to the more technical side of measuring SEO, you have to peel back the layers and look beyond clicks and traffic. These metrics help complete the wider picture of SEO performance, and they can help uncover additional opportunities for progress.

Search index – Through search consoles and other tools

Ensuring that an accurate index of your website exists is a core SEO task, because if only part of your site, or the wrong pages, are indexed, then your overall performance will suffer.

Although a small part of overall SEO work, it’s arguably one of the most crucial.

One quick way is to enter the “site:” operator followed by your site’s domain (for example, site:example.com) to see the approximate number of pages that exist in a search engine’s index.

To inspect the status of a specific page on Google, Google Search Console is your best option. The newest version of Search Console provides a URL inspection tool that brings up results quickly.

Screenshot of the latest Google Search Console

Search crawl errors

As well as looking at what has been indexed, any website owner needs to keep an eye out for what may be missing, or if there have been any crawl errors reported by Google. These often occur because a page has been blocked, or the format isn’t crawlable by Google.

Head to the “Coverage” tab within Google Search Console to understand the nature of any errors and what page the error relates to. If there’s a big zero, then you and your business naturally have nothing to worry about.

Screenshot of viewing error reports in Google Search Console

Click-through rate (CTR) and bounce rate

In addition to where and how your website ranks for searches, a metric to consider is how often your site listing is clicked in the SERPs. Essentially, this shows the percentage of impressions that result in a site visit.

This percentage indicates how relevant your listing is to the original query and how well your result ranks compared to your competitors.

If people like what they see and can easily find your website, then you’ll likely get a new site visit.

The Google Search Console is the best go-to resource again for the most accurate data. Just select the performance tab and toggle the CTR tab to browse data by query, landing page, country of origin, and device.

Screenshot of a CTR performance graph on the basis of query, landing page, country of origin, and device
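The metric itself is simple arithmetic: clicks divided by impressions. A minimal sketch, using made-up Search Console numbers:

```python
def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate; returns 0.0 when there are no impressions."""
    return clicks / impressions if impressions else 0.0

# Hypothetical figures for one query: 90 clicks from 3,000 impressions.
print(f"{ctr(90, 3000):.1%}")  # → 3.0%
```

The zero-impressions guard matters when iterating over long query lists, where many rows have no impressions in the chosen date range.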

If someone does venture onto your site, you will want to ensure the page they see is relevant to their search; after all, search algorithms love to reward relevance! If the page doesn’t contain the required information or isn’t user-friendly, the user will likely leave to find a better resource without taking any action, which is known as a bounce.

In some cases, one visit may be all that is needed, therefore a bounce isn’t an issue. Make sure to view this metric in the wider context of what your business offers.

Mobile friendliness

The unveiling of mobile-friendliness as a ranking factor was widely reported in 2015. This is crucial given the evolution of browsing behavior, with mobile traffic often greater in volume than desktop traffic for some sites.

Another report in the ever-useful Google Search Console gives a clear low-down of how mobile-friendly a site is, showing warnings for any issues. It’s worth saying that this measure isn’t an indication of how likely a conversion is, but rather of the quality of your site on a mobile device.

Graph for tracking the mobile-friendliness of a website

Follow your metrics and listen to the data

As mentioned at the start of this article, data drives decisions. In all areas of business, certain numbers will stand out. With SEO, a full understanding comes from multiple data points, with positives and negatives to be taken at every point of the journey.

Ultimately, it often comes down to traffic, ranks, and conversions, but the numbers that truly drive a business also include the metrics that don’t often see the light of day yet are just as important.

As a digital marketer, it is always a learning experience to know how data drives the evolution of a business and ultimately, how successes and opportunities are reported and understood.

Matthew Ramsay is Digital Marketing Manager at Digitaloft. 


Cognitive biases: How to get people to prefer your business

Most of us like to believe that we’re inherently logical people, especially when it comes to purchasing decisions. However, we’re not computers. No matter how logical we try to be, emotion always influences our decisions to some extent.

Psychologists refer to these emotional factors in our decision-making processes as “cognitive biases.” Without even realizing it, we make most of our decisions based on these emotional biases and build our logical arguments for doing or buying something around justifying our emotions.

This is good news for marketers.

If you understand how cognitive biases work and how to use them in your marketing, you can make people feel like you have a better product, service or company. At that point, it almost doesn’t matter whether or not you are better; people will buy from you because that’s what feels right.

With all that in mind, let’s take a look at a couple of powerful cognitive biases you can use to make people view your products or services as superior to the competition.

The mere exposure effect

Back in the 60s, Charles Goetzinger was teaching at OSU and decided to run a social experiment on his class. Before the semester started, he contacted one of the students and asked them to wear a giant black bag over their entire body during class – for the whole semester.

Now, putting aside the fact that he was actually able to convince a student to do this, this experiment resulted in some fascinating observations.

As you might expect, the initial response to the black bag was very hostile and derogatory. However, as the semester progressed, people’s response to the unexplained bag softened and the student actually began to make friends.

Why? Familiarity makes us comfortable.

As humans, our brains are wired to identify potential threats. A sudden change in the ambient noise could represent an impending attack, an odd taste in your water could mean that it’s contaminated…to our brains, “new” often means “danger.”

However, if we’re around something new for a while and nothing bad happens, our brains relax. The new thing becomes part of our “normal” experience and our brains move on to evaluating other, newer unknowns for potential danger.

How to use the mere exposure effect

Unfortunately, most businesses can’t afford to spend a semester waiting for someone to get comfortable with their brand. Long term, building familiarity with your customers is a great idea, but often you need them to buy within the next few days to weeks.

So, how do you use mere exposure to get people comfortable enough to buy in a reasonable time frame? Here are a few options to consider.

Retarget

When done right, retargeting is all about familiarity. There’s a reason why retargeting campaigns have a better CTR than most display campaigns. Retargeting ads remind people that they are interested in what you’re selling and help them feel comfortable buying from you.

Also, retargeting is a great way to address potential concerns your customers might have during their buyer journey. Remember, on a primal level, “new” is scary because it’s a potential threat. If you can build familiarity while decreasing the perceived threat level of doing business with you, your potential customers will feel more comfortable buying from you than anyone else.

Repurpose

Repetition and re-exposure are a fundamental part of how we learn and become comfortable with new things. This principle is just as important to an adult making a purchasing decision as it is to a kid learning how to walk.

For example, lots of myths, trends and fads get started when something goes viral on social media. People read, hear and see something so many times that it simply becomes a part of their belief system – regardless of whether what they believe is actually true.

In essence, repetition creates reality.

This principle works just as well for honest businesses as it does for fad diets. If you’ve come up with a great marketing message, why not turn it into a blog post, podcast, infographic, video, slide presentation, etc.?

Repurposing your content this way will increase the number of ways and times that people encounter your message. As a result, when they see your actual ads, they’ll instinctively resonate with your messaging – even if they don’t remember why.

Risk compensation theory

Back when governments started really paying attention to automobile safety, Sam Peltzman discovered something interesting. Although safety features like seatbelts made people safer, the benefits of these features to the general public were lower than the government’s safety tests had predicted.

Why? The safer people felt in their cars, the more risks they took when they drove.

Again, this gets back to how our brains are wired. Every decision carries some sort of inherent risk and we are constantly trying to decide whether the risk is worth the reward.

However, this decision isn’t a matter of pure math. Our internal risk-benefit analysis isn’t based on hard data; it’s based on the perceived risks and benefits.

So, if someone is considering cutting in front of a semi and they aren’t wearing their seatbelt, the perceived risk is pretty high. If their timing is even a little bit off, they could be thrown from the car and die. But, if they think that their seatbelt will protect them, the perceived risk is a lot lower, so they’re more likely to decide that the risk of being in an accident is worth the benefit of saving a few seconds on their commute.

This same principle applies to marketing. No matter what you’re trying to get people to do, your customers will associate some level of risk with it: loss of money, loss of time, loss of privacy, etc. Your job is to increase the perceived benefits and decrease the perceived risks so that people feel safe doing what you want them to.

How to use risk compensation theory

If you do it right, people should feel like there is more risk in not following your call-to-action (CTA) than there is in following it. Essentially, this flips the risk-benefit equation on its head.

Of course, achieving that is easier said than done, but here are a few ways to maximize the perceived benefits and minimize the perceived risks of your CTA:

Use social proof

Social proof is one of the best tools in the online marketer’s toolbox. These days, people instinctively distrust marketing (after all, you’re getting paid to promote your business). However, people trust other people, so including testimonials and reviews is a great way to tilt the risk-benefit equation in your favor.

That being said, simply having testimonials or reviews isn’t enough. You need social proof that makes people feel like giving you their money, time or information is a safe bet. If the reviews you share don’t inspire confidence (or worse, feel fake), they can actually work against you.

Use someone else’s halo

A similar, but different, way to decrease perceived risk is to associate yourself with a third party your audience already trusts.

The classic example of this is getting certified for a trust seal from a third-party business. However, this tactic has been used for so long that it’s become an expectation. These days, having a trust seal might not decrease the perceived risk of doing business with you, but not having one can certainly increase it!

If you want to use someone else’s trustworthiness to build trust in your business, one of the easiest approaches is influencer marketing. Unlike testimonials – which are usually placed on your marketing materials – influencers advocate for your business to their loyal following.

Because influencers already have a high level of trust with their audience, their endorsement of your business feels far more credible and meaningful than anything you could ever say about yourself. It’s a simple, but highly effective way to use someone else’s halo to build trust in your business.

Conclusion

Honestly, the mere exposure effect and risk compensation theory are just the tip of the iceberg. There are dozens of other cognitive biases that you can use in your marketing to encourage people to choose your business over the competition.

The trick is understanding how different cognitive biases play into the purchasing decisions of your target audience. Once you understand that, it’s usually fairly easy to influence those biases in a way that favors your business.


Opinions expressed in this article are those of the guest author and not necessarily Marketing Land.


About The Author

Jacob is a passionate entrepreneur on a mission to grow businesses using PPC and CRO. As the Founder and CEO of Disruptive Advertising, Jacob has developed an award-winning and world-class organization that has now helped over 2,000 businesses grow their online revenue. Connect with him on Twitter.

Business evolution happens in experimentation sprints: Insights from André Morys, GO Group Digital

Many executives are seeking a “digital transformation” as a lofty solution.

But transformative change really happens in sprints.

That’s because experimentation is the agile approach to business evolution.

When it comes to business evolution, you don’t know what you don’t know. If you aren’t learning, your business is not evolving at the pace you need to surpass your competitors.

Let’s start with a story…

In the late 1990s, business leaders were hyped on the possibilities the internet offered. Countless start-ups sprouted up to get an early grasp on web business.

But by 2002, when the dot.com bubble burst, many companies struggled to survive. Companies like Pets.com, WorldCom, and WebVan failed completely, and other organizations experienced declining revenues after a period of optimistic growth.

WiderFunnel André Morys konversionsKRAFT
A young, confident André Morys with his business partner at the start of konversionsKRAFT in 1996. (Source: André Morys)

André Morys, Managing Partner of GO Group Digital and Co-Founder of konversionsKRAFT, lived through this experience. In the first five years, it seemed like he had hit pay dirt with his business. His team was confident about their future direction until 2002, when they faced a 60% decline in revenue over three months.

But when André reflects back on his company’s history, this struggle provided an unprecedented opportunity to learn about leadership and finance, culture and motivation. The aftermath of the dot.com bubble accelerated his understanding of how to strategically evolve his business for future growth.

There are many parallels in today’s market. 52% of the companies that were in the Fortune 500 in 2000 no longer exist. And business leaders are constantly battling this threat.

Today, many executive teams aspire to the “digital transformation” solution, because how else can you keep pace with the market and with rapid technological change?

WiderFunnel André Morys Unbounce CTA Conference Presentation GO Group Digital
André Morys, Managing Partner of GO Group Digital, presented “The Source of Disruption Is in the Mind of Your Customer” at Unbounce’s 2018 CTA conference.

In this post, you will learn key insights from André Morys, adapted from his presentation, “The Source of Disruption Is in the Mind of the Customer,” at Unbounce’s 2018 CTA conference.

These key insights include:

  • Why we should be focusing on velocity of learnings (not tests) by prioritizing impactful experiments
  • How to speak the same language as your Executive team by pinpointing their emotions and motivations
  • And why a valuable customer experience is at the heart of business evolution.

Scoping out the big picture: The Gartner Hype Cycle

Roy Amara, a researcher, scientist and futurist, claimed that we tend to overestimate the effects of technology in the short term and underestimate the effects in the long term. This phenomenon is called Amara’s Law.

When we think big picture about digital transformation, this forecast holds true. We are hyped up on new tools and technologies when we first adopt them, but once they present challenges, we can become discouraged. Because how can we really leverage technology to solve our business problems?

WiderFunnel The Gartner Hype Cycle for Digital Transformation
The Gartner Hype Cycle is a framework for viewing the path from adoption to actually driving business decisions. (Source: Gartner)

The research firm Gartner furthered Amara’s Law by introducing the concept of a hype cycle. When it comes to experimentation, WiderFunnel traces the maturity of organizations through its five stages:

  1. The Technology Trigger: You are excited at the possibilities of experimentation but business impact is yet to be proven at this initial stage.
  2. Peak of Inflated Expectations: Early adopters claim success with testing at this stage, but you might be failing to properly leverage experimentation as a strategy.
  3. The Trough of Disillusionment: Initial hype is tapering off. Internal enthusiasm fades. You might recognize at this stage that experimentation must provide results to continue the investment.
  4. The Slope of Enlightenment: Experimentation is starting to show its possibilities as you understand how to better leverage testing to create business impact.
  5. And the Plateau of Productivity: With consistent bottom-line impacts, you can now start to leverage experimentation as an organizational strategy for business evolution.

Technology can be a solution to business evolution, but leaders need to strategize how to leverage technology to solve real business problems. André articulated that such challenges are actually what is driving your digital transformation.

WiderFunnel André Morys Digital Transformation
When you embrace the pain, you can start to understand the truth to make your business grow, according to André.

As you start to scale your experimentation program, these learnings make your team’s workflow more efficient and allow you to zero in on the hypotheses that can make the most impact.

The good news: You can accelerate your learnings for how to evolve your business through experimentation. Even if these seem like small wins, compounded over time they truly drive your organization’s growth through digital technology. (For example, what is the cumulative impact of a reported 2% lift over 50 experiments? It’s not 100%; it’s a compounded 169%, because 1.02^50 ≈ 2.69.)

WiderFunnel Digital Transformation through Experimentation
Experimentation is the agile approach to digital transformation. It facilitates data-driven decision making. (Source: André Morys)
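The compounding arithmetic can be checked directly. A 2% lift repeated across 50 experiments multiplies the baseline each time rather than adding to it:

```python
# Fifty experiments, each lifting the baseline by 2%.
naive_sum = 50 * 0.02        # additive view: a "100%" total lift
compounded = 1.02 ** 50 - 1  # multiplicative view: the true cumulative lift

print(f"naive: {naive_sum:.0%}, compounded: {compounded:.0%}")
# → naive: 100%, compounded: 169%
```

A simple additive estimate understates the cumulative effect considerably, which is the whole argument for sustained, repeated experimentation.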

Once organizations introduce a defined process and protocol, with systems and procedures in place for prioritizing experiments by impact, they are able to scale their programs for long-term business evolution.


Addressing your strategic blind spots: The Dunning-Kruger Effect

It is far more common for people to allow ego to stand in the way of learning.

If you are relying on the HiPPO (highest paid person’s opinion) for your business evolution strategy, how confident are you in their abilities? And do you think they have the competence to judge their own limitations?

When André reflects back on his first five years of business before the dot.com bubble burst, he sees how his confidence was an example of the Dunning-Kruger Effect.

The Dunning-Kruger Effect is a cognitive bias whereby an individual with low ability has mistaken confidence and believes they are more competent than they are.

And people with high competence often view themselves as having lower abilities than they do in actuality. As André states, “You don’t know what you don’t know.”

WiderFunnel The Dunning-Kruger Effect
The Dunning-Kruger Effect demonstrates how people with low abilities can overestimate their competency, and people with high abilities can underestimate their competency. Hello, imposter syndrome!

This cognitive bias has been explored in depth by psychologists David Dunning and Justin Kruger. Their research shows that people who suffer from the Dunning-Kruger Effect may resist constructive criticism if it doesn’t align with their own self-perception. They may question the evaluation and even deem the process flawed.

Kruger and Dunning’s interpretation is that accurately assessing skill level relies on some of the same core abilities as actually performing that skill, so the least competent suffer a double deficit. Not only are they incompetent, but they lack the mental tools to judge their own incompetence.

When it comes to innovation, the Dunning-Kruger Effect creates a blind spot for threats and opportunities that can affect your business success. Instead, a business leader needs to always interrogate their perception of reality to get closer to the truth.

Truth―more precisely, an accurate understanding of reality―is the essential foundation for producing good outcomes.

André now sees how the growth and learning that came out of the dot.com bubble challenged his own self-perception. He began to understand the implications of his business decisions, and he became more in tune with the possibilities of the unknown.

WiderFunnel André Morys konversionsKraft
Today, konversionsKRAFT has grown to a team of 85. The downturn from the dot.com bubble led to increased learnings in growth, management, culture, finance, leadership, motivation, and more. (Source: André Morys)

Applying his own professional and personal learnings from this experience to the world of experimentation, André sees a chasm between the manager’s aspirations of digital transformation and the optimizer’s experiments that lead to the desired data-driven decision making.

“The problem is that [the Dunning-Kruger Effect] happens in organizations all the time,” explains André.

“Many optimizers are very skilled in experimentation; they know everything about it: about A/B testing, confidence levels and statistics, testing tools and psychology. Whereas management has no idea; they don’t get it. They are talking about digital transformation as a big project, while their optimization team is really doing the work that is needed.”

What makes André’s argument that organizations need to focus on learnings so compelling is that a culture of experimentation—testing and learning—can really drive your business evolution and lead to the hockey-stick growth that you need to sustain your market.

But the Executive team and the tactical experts need to get on the same page, especially when it comes to successful business evolution.

As an Optimization Champion, you are the catalyst.

Change-agents build bridges between their peers, empowering them to accept change as it comes. They understand how to build and nurture relationships in order to find common ground with others. They are organized and understand how to speak to c-level executives clearly.

André’s comparison of Optimization Champions and Executive teams within an organization with the Dunning-Kruger effect is insightful.

He argued that Optimization Champions have a high level of competence, but they don’t understand how to position their work to gain executive buy-in because they are too immersed in their specialization.

But he also emphasized that Optimization Champions truly are the catalysts of digital transformation. His advice is simple: Optimization Champions need to speak the same language as the Executive team.

Optimizers should stop talking about uplifts and statistics and A/B tests. They should talk about what A/B testing changes within an organization. They should report business impact, not statistical confidence levels, so they are compatible, so they are speaking at the manager’s level.

Experimentation drives the digital transformation at many successful organizations. Just look at Airbnb, or Uber, or Facebook. These organizations test and learn their way to business evolution.

André points out that experimentation facilitates an organization’s digital transformation, but many managers just don’t know it yet.

Your communication of experimentation’s value needs to be accessible to those who aren’t educated in the technical aspects.

And that’s exactly what André recommends. Understand your experimentation program’s internal stakeholders—your Executive team. Understand their fears and anxieties, their emotions and motivations when communicating your experimentation program’s value.

WiderFunnel André Morys Evangelizing experimentation personas of internal stakeholders
André recommends creating personas for your internal stakeholders so you can communicate the value of your experimentation program in a way that they’ll appreciate. (Source: André Morys)

As a marketer, you are best prepared for this task as you can craft your internal stakeholder’s persona, so you can demonstrate—in their language—the impact of your program on the business.

Featured Resource

Build your internal stakeholder persona to get buy-in!

Dive into the emotions and motivations that will resonate with your internal stakeholders to start proving the value of your experimentation program.

In your experimentation sprints, prioritize business impact over speed.

If you’re not failing, you’re not pushing your limits, and if you’re not pushing your limits, you’re not maximizing your potential.

In the world of experimentation, there is an emphasis on experimentation velocity. It makes sense: the more tests you run, the more insights you can gain about your business.

But the buzz around velocity has led many leaders to focus on speed, rather than the quality of insights and the business impact of experiments.

And if you aren’t focusing on business impact, you’ll never get on the same page as your Executive team.

If you decide to test many ideas, quickly, you are sacrificing your ability to really validate and leverage an idea. One winning A/B test may mean quick conversion rate lift, but it doesn’t mean you’ve explored the full potential of that idea.

Experimentation sprints are chock full of business insights and impact—exactly what organizations need to continuously evolve their businesses.

That’s why our emphasis as optimizers should be on velocity of learnings, not just experiments.

It’s an agile approach to business evolution. And that’s a sentiment that was echoed by Johnny Russo, in “The 5 Pillars of Digital Transformation Strategy at Mark’s.”

“Because how do we process all this change and learning, without being efficient?” he asked.

“Those organizations first to adapt will be most prepared. And so I think the foundation has to be an agile methodology.”

But optimizers need to ensure they are driving higher impact experiments and deeper learnings by implementing rigorous processes. A prioritization framework ensures you are launching experiments that have the highest potential for results, on the most important areas of your business, with the easiest implementation.

But to increase experiment velocity, you need a defined optimization process.

According to the “State of Experimentation Maturity 2018” original research report, experiment velocity is a focal point for most organizations.

WiderFunnel State of Experimentation Maturity Velocity Increase
The majority of both Small and Medium Enterprises (52%) and Large Enterprises (64%) plan to increase experiment velocity in the next 12 months.

52% of Small and Medium Enterprises and 64% of Large Enterprises in the survey indicated they plan to increase experiment velocity in the next year.

However, only 24% and 23% (respectively) of these organizations plan to increase budget, which puts even more emphasis on workflow efficiency and prioritization so that you are not straining your resources.

“One of the most common roadblocks to increasing velocity is workflow efficiency. Review and document your workflows from ideation to analysis to ensure seamless experiment execution,” explains Natasha Wahid in the research report.

Another common roadblock is a lack of resources, particularly in Design and Web Development. Ensure you have the right team in place to up your velocity, and plan for possible bottlenecks.

As André articulated, focusing on business impact instead of speed will ensure you are learning faster than your competition.

That’s because digital transformation is not about implementing technology, it’s about leveraging technology to accelerate your business.

How to use the PIE Prioritization Framework to identify the most impactful experiments.

You can’t test everywhere or everything at once. With limited time and resources and, most importantly, limited traffic to allocate to each test, prioritization is a key part of your experimentation plan.

Prioritizing where you invest energy will give you better returns by emphasizing pages that are more important to your business.

The PIE Framework is made up of the three criteria you should consider to prioritize which pages to test and in which order: Potential, Importance, and Ease.

WiderFunnel PIE Prioritization Framework
The PIE Prioritization framework allows you to zero in on those experiments that can drive the most business impact. That’s how you get executive-level buy-in!

Potential: How much improvement can be made on this page(s)? You should prioritize your worst performers. This should take into account your web analytics data, customer data, and expert heuristic analysis of user scenarios.

Importance: How valuable is the traffic to this page(s)? Your most important pages are those with the highest volume and the costliest traffic. You may have identified pages that perform terribly, but if they don’t have significant volume of costly traffic, they aren’t experimentation priorities.

Ease: How difficult will it be to implement an experiment on a page or template? The final consideration is the degree of difficulty of actually running a test on this page, which includes technical implementation, and organizational or political barriers.

The less time and resources you need to invest for the same return, the better. This includes both technical and “political” ease. A page that would be technically easy to test on may have many stakeholders or vested interests that can cause barriers (like your homepage, for example).

You can quantify each of your potential opportunities based on these criteria to create your test priority list. We use the PIE Framework in a table to turn all of these data inputs into an objective number ranking.
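To make the ranking concrete, here is a minimal Python sketch of a PIE scoring table. The page names and 1-to-10 scores are hypothetical; plug in your own data inputs.

```python
# Rank candidate pages with the PIE framework: score Potential,
# Importance, and Ease (here on a 1-10 scale), then sort by the average.
# Page names and scores below are hypothetical examples.
pages = {
    "Checkout":     {"potential": 8, "importance": 9, "ease": 6},
    "Homepage":     {"potential": 6, "importance": 9, "ease": 3},
    "Pricing":      {"potential": 9, "importance": 7, "ease": 8},
    "Blog archive": {"potential": 7, "importance": 3, "ease": 9},
}

def pie_score(scores):
    """Average of the three PIE criteria for one page."""
    return (scores["potential"] + scores["importance"] + scores["ease"]) / 3

# Highest PIE score first: this becomes your test priority list.
ranked = sorted(pages.items(), key=lambda kv: pie_score(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: {pie_score(scores):.1f}")
```

Note how the Homepage, despite its importance, sinks in the ranking because of its low Ease score: exactly the “political barriers” trade-off described above.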

Learn more about PIE

Seek the “truth” in a delightful customer experience.

Imagine you’re a taxi service. And you see Uber’s market success as a threat to your livelihood.

What makes them more successful?

Get out the whiteboards and some might write: “We need an app!” or “We need to hire a data scientist!” Because on the superficial level, data and technology seem pivotal to a digital transformation.

But when you look deeper at Uber’s strategy, you will see that they focus on delighting their customers with their experience.

And because of Uber’s success, how people get rides has radically changed.

WiderFunnel Uber Growth Customer Experience
Look at the hockey-stick growth of Uber. It’s all attributed to a valuable customer experience. (Source: Spaceotechnologies.com )

But Uber is not the only example.

Many businesses have transformed the market through delightful customer experiences.

Amazon makes online ordering a breeze. No more long wait times; André joked that he has to slow down their service so that he will actually be there to accept the order.

And Airbnb—they’ve made vacationing a unique and desirable experience.

André, an expert in emotional marketing and the Limbic Model, emphasized the need to go beyond conversions and focus on a customer experience that delights.

Companies that fail to embrace CX as a strategic path to growth won’t just be lagging, they’ll get left behind.

And that’s because you can drive experiments with the most business impact by honing in on your customer experience.

That means, you have to understand your customer—their fears and anxieties, their thoughts and desires—when designing an experience to meet their emotional needs and states.

The most successful organizations have honed in on what makes their experience delightful. In “Moving the needle: Strategic metric setting for your experimentation program,” I talked about creating a True North metric that aligns with your value proposition, as a way of creating internal focus on your customer experience.

Slack, a team collaboration program, focuses on optimizing for teams that send over 2,000 messages.

LinkedIn focuses on quality sign-up, ensuring that new users are actively making connections on the social platform.

And Facebook optimizes for new users who make 10 friends in seven days.

See, it’s not tools and technologies that evolve these businesses. It’s how they leverage digital technologies across the organization to solve real business problems.

And when you align your customer experience goals with your experimentation program, you are the competition.

You learn and adapt with each new experiment to what your customer wants and needs from your experience. That’s how you drive business impact. That’s how you get internal buy-in.

But what’s more—that’s how you stay relevant in the market.

And if you are experimenting, keep slogging through the “trough of disillusionment”.

André Morys knows from experience that businesses that do can reach enlightenment.

What are your biggest challenges with experimentation that can be attributed to the “trough of disillusionment”? We’d love to hear your comments!

Author

Lindsay Kwan

Marketing Communications Specialist

In this roadmap for the executive, we explore the biggest challenges faced by enterprise organizations as they work to embed experimentation within their infrastructure – and how to surmount them.

Get your roadmap


How data visualization enhances business decision making

Data visualization is the art of our age.

Just as Michelangelo approached that giant block of Carrara marble and said, “I saw the angel in the marble and carved it until I set him free,” analysts are approaching data with the same visionary and inquisitive mind.

In today’s age, where big data reigns, the art of data analysis is making sense of our world.

Analysts are chiseling the bulk of raw data to create meaning—patterns and associations, maps and models— to help us draw insights, understand trends, and even make decisions from the stories the data tell.

Data visualization is the graphical display of abstract information for two purposes: sense-making (also called data analysis) and communication. Important stories live in our data and data visualization is a powerful means to discover and understand these stories, and then to present them to others.

But such complex information is not easily digestible by the human brain until it is presented in a data visualization.

Tables, charts, and graphs provide powerful representations of numerous data points so that the insights and trends are easily understood by the human brain.

That’s why data visualization is one of the most persuasive techniques to evangelize experimentation today—particularly in an era of ever-decreasing attention spans.

On a slide. On a dashboard in Google Data Studio. Or simply something you plan to sketch on a whiteboard. This presentation of the data will determine whether your trends and insights are understood and accepted, and whether the right inferences are drawn about what action to take.

A thoughtfully crafted visualization conveys an abundance of complex information using relatively little space and by leveraging our visual system—whether that’s the optimal number of lead generation form fields or the potential ROI of your program throughout the quarter.

In this post, we dig into the practice of designing data visualizations for your audience. You will learn:

  • How your data visualizations can enhance the Executive decision-making process, using the guidelines of the Cynefin Framework
  • Why data visualizations are the most powerful way for the human brain to compute complex information through dual processing theory
  • What makes data visualizations effective using the five qualities defined by expert Alberto Cairo
  • And a real-world example of how you can work through a problem to arrive at the most effective data visualization for your audience.

The Brain (Or, why we need data visualization)

You may be familiar with System 1 and System 2 thinking, known as dual processing theory. System 1 (or Type 1) is the predominant fast, instinctual decision-making and System 2 (Type 2) is the slow, rational decision-making.


Dual Process Theory

Dual Process Theory categorizes human thinking into two types or systems.


We often relegate System 1 thinking to your audience’s emotions. (We talked about it in “Evangelizing experimentation: A strategy for scaling your test and learn culture” or in “I feel, therefore I buy: How your users make buying decisions.”)

But that immediate grasp over complex information in a data visualization is also related to System 1 thinking.

A large part of our brain is dedicated to visual processing. It’s instinctual. It’s immediate.

If you have a strong data visualization, every sighted person can understand the information at hand. A seemingly simple 5×5 chart can provide a snapshot of thousands of data points.

In other words, visualizing data with preattentive features in mind is akin to designing ergonomic objects: You know that a sofa is made for sitting. You know that a handle on a coffee mug is designed for your hand. (This is called preattentive processing.)

Preattentive processing occurs before conscious attention. Preattentive features are processed very quickly…within around 10 milliseconds.

When creating data visualizations, you are designing for human physiology. Any other method of translating that information is a disservice to your message and your audience.

When we consider the speed at which people grasp the many data points in a problem, through dual processing and preattentive processing, it’s almost foolish not to take advantage of data visualization.

When you design data visualizations, you are designing for your audience.

Understanding how Executives make decisions

A data visualization is a display of data designed to enable analysis, exploration, and discovery. Data visualizations aren’t intended mainly to convey messages that are predefined by their designers. Instead they are often conceived as tools that let people extract their own conclusions from the data.

Data analysis allows Executives to weigh the alternatives of different outcomes of their decisions.

And data visualizations can be the most powerful tool in your arsenal, because your audience can see thousands of data points on a simple chart.

Your data visualization allows your audience to gauge (in seconds!) a more complete picture so they can make sense of the story the data tell.

In Jeanne Moore’s article “Data Visualization in Support of Executive Decision Making,” the author explored the nature of strategic decision making through the Cynefin framework.

The Cynefin Framework

The Cynefin Framework aids Executives in determining how to best respond to situations by categorizing them in five domains: Simple, Complicated, Complex, Chaotic and Disorder. Source: HBR’s A Leader’s Framework for Decision Making



The Cynefin Framework (pronounced ku-nev-in) allows business leaders to categorize issues into five domains, based on the ability to predetermine the cause and effect of their decisions.

Created by David Snowden in 1999 when he worked for IBM Global Services, the Cynefin framework has since informed leadership decision making at countless organizations.

The five domains of the Cynefin Framework are:

  • In the Simple Domain, there is a clear cause and effect. The results of the decision are easy to predict and can be based on processes, best practices, or historical knowledge. Leaders must sense, categorize and respond to issues.
  • In the Complicated Domain, multiple answers exist. Though there is a relationship between cause and effect, it may not be clear at first (think known unknowns). Experts sense the situation, analyze it and respond to the situation.
  • In the Complex Domain, decisions can be clarified by emerging patterns. That’s because issues in this domain are susceptible to the unknown unknowns of the business landscape. Leaders must act, sense and respond.
  • In the Chaotic Domain, leaders must act to establish order in a chaotic situation (an organizational crisis!), and then gauge where stability exists and doesn’t, to get a handle on the situation and move it into the complex or complicated domain.
  • And in the Disorder Domain, the situation cannot be categorized into any of the other four domains. It is utterly unknown territory. Leaders can analyze the situation and categorize different parts of the problem into the other four domains.

In organizations, decision making is often related to the Complex Domain because business leaders are challenged to act in situations that are seemingly unclear or even unpredictable.

Leaders who try to impose order in a complex context will fail, but those who set the stage, step back a bit, allow patterns to emerge, and determine which ones are desirable will succeed. They will discern opportunities for innovation, creativity, and new business models.

David J. Snowden and Mary E. Boone

Poor quarterly results, management shifts, and even a merger—these Complex Domain scenarios are unpredictable, with several methods of responding, according to David J. Snowden and Mary E. Boone.

In other words, Executives need to test and learn to gather data on how to best proceed.

“Leaders who don’t recognize that a complex domain requires a more experimental mode of management may become impatient when they don’t seem to be achieving the results they were aiming for. They may also find it difficult to tolerate failure, which is an essential aspect of experimental understanding,” explain David J. Snowden and Mary E. Boone.

Probing and sensing the scenario to determine a course of action can be assisted by a data analyst, who helps leaders collaboratively understand the historical and current information at hand in order to guide the next course of action.

An organization should take little interest in evaluating — and even less in justifying — past decisions. The totality of its interest should rest with how its data can inform its understanding of what is likely to happen in the future.

Of course, there is always the threat of oversimplifying issues, treating scenarios like they have easy answers.

But even with situations in the other domains of the Cynefin Framework, data visualization can provide insight into next steps—if they meet certain criteria.

What makes an effective data visualization

The presenter of the visualization must also provide a guiding force to assist the executive in reaching a final decision, but not actually formulate the decision for the executive.

With data visualization, there will always be insightful examples and examples that clearly missed the mark.

Avinash Kaushik, in his Occam’s Razor article, “Closing Data’s Last-Mile Gap: Visualizing For Impact!” called the ability of data visualizations to influence the Executive’s decision-making process closing the “last-mile” gap.

It can take an incredible effort to gather, sort, analyze and glean insights and trends from your data. If your analysis is solid, if your insights and trends are enlightening, you don’t want to muddle your audience with a confusing data visualization.

Remember: a data visualization is only as impactful as its design is on your audience.

In terms of the value in data visualization, it must provide simplicity, clarity, intuitiveness, insightfulness, gap, pattern and trending capability in a collaboration enabling manner, supporting the requirements and decision objectives of the executive.

Alberto Cairo’s Five Qualities of Great Data Visualizations

Alberto Cairo, author of “The Truthful Art: Data, Charts, and Maps for Communication,” outlines five qualities of great data visualizations. Your data visualization should be:

  1. Truthful: It should be based on thorough and objective research—just as a journalist is expected to represent the truth to the best of their abilities, so too is the data analyst.
  2. Functional: It should be accurate and allow your audience to act upon your information. For instance, they can perceive the incremental gains of your experimentation program over time in a sloping trendline.
  3. Beautiful: It needs to be well-designed. It needs to draw in your audience’s attention through an aesthetically pleasing display of information.
  4. Insightful: It needs to provide evidence that would be difficult to see otherwise. Trends, insights, and inferences must be drawn by the audience, in collaboration with the data analyst.
  5. Enlightening: It needs to illuminate your evidence. It needs to enlighten your audience with your information in a way that is easy to understand.

When you nail down all five of these criteria, your data visualization can shift your audience’s ways of thinking.

It can lead to those moments of clarity on what action to take next.

So, how are these design decisions made in data visualization?

Here’s an example.

Free questionnaire

Improve your data visualization’s impact!

Know where and how to improve your data visualization’s ability to tell a story to your audience.

How we make decisions about data visualization: An example in process

A note on framing: While the chart and data discussed below are real, the framing is artificial to protect confidentiality. The premise of this analysis is that we can generate better experiment ideas and prioritize future experiments by effectively communicating the insights available in the data.

Lead generation forms.

You probably come across these all the time in your web searches. Some forms have multiple fields and others have few—maybe enough for your name and email.

Suppose you manage thousands of sites, each with its own lead generation form—some long and some short. And you want to determine how many fields you should require from your prospects.

If you require too many form fields, you’ll lose conversions; too few, and you’ll lose information to qualify those prospects.

It’s a tricky situation to balance.

Like all fun data challenges, it’s best to pare the problem down into smaller, manageable questions. In this case, the first question you should explore is the relationship between the number of required fields and the conversion rate. The question is:

How do conversion rates change when we vary the number of required fields?

Unlike lead quality—which can be harder to measure and is appraised much further down the funnel—analyzing the relationship between the number of required fields and the number of submissions is relatively straightforward with the right data in hand. (Cajoling the analytics suite to provide that data can be an interesting exercise in itself—some will not do so willingly.)

So, you query your analytics suite, and (assuming all goes well), you get back this summary table:

WiderFunnel Data Visualization Examples
On this table, how immediately do you register the differences between the average conversion rates? Note how you process the information—it’s System 2 thinking.
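A summary table like this one can be built by grouping raw per-site records by their number of required fields. The sketch below uses plain Python and made-up records, since the real export format will vary by analytics suite.

```python
# Group hypothetical site records by number of required fields, then
# compute the average conversion rate and the share of sites per group.
sites = [
    # (required_fields, conversion_rate) -- illustrative values only
    (1, 0.11), (1, 0.09), (2, 0.08), (2, 0.10),
    (2, 0.06), (3, 0.05), (3, 0.07), (4, 0.04),
]

groups = {}
for fields, rate in sites:
    groups.setdefault(fields, []).append(rate)

total = len(sites)
summary = {
    fields: {
        "avg_conversion_rate": sum(rates) / len(rates),
        "percent_of_sites": 100 * len(rates) / total,
    }
    for fields, rates in sorted(groups.items())
}

for fields, row in summary.items():
    print(fields, round(row["avg_conversion_rate"], 3), row["percent_of_sites"])
```

The same aggregation is a one-line `groupby` in pandas; the point is that the summary is a straightforward transformation once you have the raw data in hand.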

What’s the most effective way to convey the message in this data?

Most of you probably glossed over the table, and truth be told, I don’t blame you—it’s borderline rude to expect anyone to try to make sense of these many variables and numbers.

However, if you spend half a minute or so analyzing the table, you will make sense of what’s going on.

In this table format, you are processing the information using System 2 thinking—the cognitive way of understanding the data at hand.

On the other hand, note how immediate your understanding is with a simple data visualization…

The bar graph

WiderFunnel Data Visualization Examples Bar Graph
Compared to the table above, the decrease in conversion rate between one and four required fields is immediately obvious, as is the upward trend after four. Your quick processing of these differences is System 1 thinking.

In terms of grasping the relationship in the data, it was pretty effective for a rough-and-ready chart.

In less than a second, you were able to see that conversion rates go down as you increase the number of required fields—but only until you hit four required fields. At this point, average conversion rates (intriguingly!) start to increase.
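A rough-and-ready bar chart of this kind takes only a few lines of matplotlib. The averages below are hypothetical stand-ins for the summary table.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# Hypothetical averages: conversion rate dips until four required
# fields, then trends back up, mirroring the pattern described above.
required_fields = list(range(1, 10))
avg_conversion = [10.2, 8.9, 7.4, 6.1, 6.8, 7.2, 7.9, 8.4, 9.0]

fig, ax = plt.subplots()
ax.bar(required_fields, avg_conversion)
ax.set_xlabel("Number of required fields")
ax.set_ylabel("Average conversion rate (%)")
ax.set_title("Conversion rate by required form fields")
fig.savefig("conversion_by_fields.png")
```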

But you can do better…

For a good data visualization, you want to gracefully straddle the line between complexity and understanding:

How can we add layers of information and aesthetics that enrich the data visualization, without compromising understanding?

No matter how clever the choice of the information, and no matter how technologically impressive the encoding, a visualization fails if the decoding fails.

Adding layers of information can’t be at the expense of your message—rather, it has to be in service of that message and your audience. So, when you add anything to the chart above, the key question to keep in mind is:

Will this support or undermine making informed business decisions?

In this case, you can have some fun by going through a few iterations of the chart, to see if any visualization works better than the bar chart.

The dot plot

Compared to a bar chart, a dot plot encodes the same information, while using fewer pixels (which lowers visual load), and unshackles you from a y-axis starting at zero (which is sometimes controversial, according to this Junk Charts article and this Stephanie Evergreen article).

In the context of digital experimentation, not starting the y-axis at zero generally makes sense because even small differences between conversion rates often translate into significant business impact (depending on number of visitors, the monetary / lifetime value of each conversion, etc.).

In other words, you should design your visualization to make apparent small differences in conversion rates because these differences are meaningful—in this sense, you’re using the visualization like researchers use a microscope.

If you are still not convinced, an even better idea (especially for an internal presentation) would be to map conversion rate differences to revenue—in that case, these small differences would be amplified by your site’s traffic and conversion goal’s monetary value, which would make trends easier to spot even if you start at 0.

Either way, as long as the dots are distant enough, large enough to stand out but small enough to not overlap along any axis, reading the chart isn’t significantly affected.

WiderFunnel Data Visualization Examples Dot Plot
Compared to the bar chart, the dot plot lowers the visual load, gives us flexibility with our y-axis (it does not start at 0), allowing us to emphasize the trend.

More importantly (spoiler alert!), our newly-found real estate (after changing from bars to dots) allows you to add layers of information without cluttering the data visualization.

One such layer is the data’s density (or distribution), represented by a density plot.

A density plot

A density plot uses the height of the curve to show roughly how many data points (what percentage of sites) require how many fields. In this case, the density plot adds the third column (“Percent of Sites”) from the table you saw earlier.

That makes it easy to see (once you understand how density plots work) how much stock to place in those averages.

For example, an average that is calculated on a small number of sites (say, less than 1% of the available data) is not as important or informative as an average that represents a greater number of sites.

So, if an average was calculated based on a mere ten sites, we would be more wary of drawing any inferences pertaining to that average.

WiderFunnel Data Visualization Example Plot Graph Density Plot
After adding the density plot, you can see that most sites require two fields, roughly the same require one and three, and after eight required fields, the distribution is pretty much flat—meaning that we don’t have many data points. So, those incredibly high conversion rates (relative to the rest) are fairly noisy and unrepresentative—something we’ll verify with confidence intervals later on.
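The distribution layer doesn’t require a full kernel density estimate to compute: a normalized count of sites per field count yields the “Percent of Sites” column directly (a KDE then smooths it into the curve). The skewed sample below is simulated for illustration.

```python
import numpy as np

# Simulate hypothetical per-site required-field counts, skewed toward
# 1-4 fields the way the article's data set is.
rng = np.random.default_rng(42)
required_fields = rng.choice(
    np.arange(1, 14),
    size=1000,
    p=[0.20, 0.25, 0.20, 0.15, 0.06, 0.05, 0.04, 0.02, 0.01,
       0.005, 0.005, 0.005, 0.005],
)

# Share of sites at each field count: the "Percent of Sites" column.
values, counts = np.unique(required_fields, return_counts=True)
percent = 100 * counts / counts.sum()
for v, pct in zip(values, percent):
    print(f"{v:2d} fields: {pct:4.1f}% of sites")
```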

Visualizing uncertainty and confidence intervals

When we add the density plot, we see that most of our data comes from sites that require between one and four fields (80%, if you added the percentages in the table), the next big chunk (19%) comes from sites that require five to nine fields, and the remaining 1% (represented by the flat sections of the density curve) require more than nine. (The 80/20 rule strikes again!)

Another useful layer of information is the confidence interval for these averages. Given the underlying data (and how few data points go into some averages), how can we represent our confidence (or uncertainty) surrounding each average?

Explaining Confidence Intervals

If you’ve never encountered confidence intervals before, here’s a quick example to explain the intuition behind them…

Let’s say you’re taking a friend camping for three days, and you want to give them enough information so they can pack appropriately.

You check the forecast and see lows of 70°F, highs of 73°F, and an average of 72°F.

So, when you tell your friend “it’s going to be about 72°F“—you’re fairly confident that you’ve given them enough information to enjoy the trip (in terms of packing and preparing for the weather, of course).

On the other hand, suppose you’re camping in a desert that’s expecting lows of 43°F, highs of 100°F, and (uh oh) an average of 72°F.

Assuming you want this person to travel with you again, you probably wouldn’t say, “it’s going to be about 72°F.” The information you provided would not support them in making an informed decision about what to bring.

That’s the idea behind confidence intervals: they represent uncertainty surrounding the average, given the range of the data, thereby supporting better decisions.

Visually, confidence intervals are represented as lines (error bars) that extend from the point estimate to the upper and lower bounds of our estimate: the longer the lines, the wider our interval, the more variability around the average.

When the data are spread out, confidence intervals are wider, and our point estimate is less representative of the individual points.

Conversely, when the data are closer together, confidence intervals are narrower, and the point estimate is more representative of the individual points.
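Those error bars come from a standard interval around each group’s mean. Here is a minimal sketch, assuming a 95% normal-approximation interval and hypothetical conversion rates:

```python
import math
import statistics

def confidence_interval(rates, z=1.96):
    """95% normal-approximation CI for the mean of a sample.
    Wider with more spread or fewer points: exactly what the
    error bars in the chart encode."""
    mean = statistics.mean(rates)
    sem = statistics.stdev(rates) / math.sqrt(len(rates))
    return mean - z * sem, mean + z * sem

# Hypothetical conversion rates for two groups of sites:
tight = [0.070, 0.072, 0.071, 0.069, 0.073, 0.070]   # many points, similar
noisy = [0.10, 0.17, 0.08]                           # few points, spread out

lo_t, hi_t = confidence_interval(tight)
lo_n, hi_n = confidence_interval(noisy)
print(f"tight: [{lo_t:.3f}, {hi_t:.3f}]")
print(f"noisy: [{lo_n:.3f}, {hi_n:.3f}]")
```

The noisy group produces a much wider interval, which is why its error bars dominate the chart even though its average is less trustworthy. (With small samples, a t-based interval would be more rigorous; the normal approximation keeps the sketch simple.)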

WiderFunnel Data Visualization Examples Dot Plot Confidence Intervals
Once you add error bars, you can see that many of those enticingly high conversion rates are muddled by uncertainty: at twelve required fields, the conversion rate ranges from less than 10% to more than 17%! Though less extreme, a similar concern holds for data points at ten and eleven required fields. What’s happening at thirteen, though?

At this point, there are two things to note: first, when you look at this chart, your attention will most likely be drawn to the points with the widest confidence intervals.

That is, the noisiest estimates (the ones with fewer data points and / or more variability) take up the most real estate and command the most attention.

Obviously, this is not ideal—you want to draw attention to the more robust and informative estimates: those with lots of data and narrower intervals.

Second, the absence of a confidence interval around thirteen required fields means that either there’s only one data point (which is likely the case, given the density curve we saw earlier), or all the points have the same average conversion rate (not very likely).

Luckily, both issues have the same solution: cut them out.

How to best handle outliers is a lively topic—especially since removing outliers can be abused to contort the data to fit our desired outcomes. In this case, however, there are several good reasons to do so.

The first reason is coverage: these outliers come from less than 1% of our entire data set, so despite removing them, we are still representing 99% of our data.

Second, they are not very reliable or representative, as evidenced by the density curve and the error bars.

Finally, and most importantly, we are not distorting the pattern in the data: we’re still showing the unexpected increase in the average conversion rate beyond four required fields.

We are doing so, however, using the more reliable data points, without giving undue attention to the lower quality ones.
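One way to implement that cut, sketched here in pandas with made-up site counts per bin: drop every required-fields bin with too few sites to support a stable estimate, then check what share of the data survives.

```python
import pandas as pd

# Hypothetical site counts per number of required fields (the long tail is sparse)
counts = {1: 40, 2: 25, 3: 12, 4: 8, 5: 6, 9: 5, 12: 3, 13: 1}
rows = [(fields, 5.0) for fields, n in counts.items() for _ in range(n)]
df = pd.DataFrame(rows, columns=["required_fields", "conversion_rate"])

min_sites = 5  # minimum observations a bin needs to keep its estimate
bin_size = df.groupby("required_fields")["required_fields"].transform("size")
trimmed = df[bin_size >= min_sites]

retained = len(trimmed) / len(df)  # share of the original data still represented
```

With these invented counts, the sparse bins at twelve and thirteen fields are dropped while 96% of the rows survive, so the overall pattern is preserved.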

Lastly, to visualize and quantify our answer to the question that sparked the whole analysis (how do conversion rates change when we vary the number of required fields?), we can add two simple linear regressions: the first going from one to four required fields, the second from four to nine required fields.

Why two, instead of the usual one?

Because we saw from the density chart discussion that 80% of our data comes from sites requiring one to four fields, a subset that shows a strong downward trend.

Given the strength of that trend, and that it spans the bulk of our data, it’s worth quantifying and understanding, rather than diluting it with the upward trend from the other 20%.

That remaining 20%, then, warrants a deeper analysis: what’s going on there—why are conversion rates increasing?

The answer to that will not be covered in this article, but here’s something to consider: could there be qualitative differences between sites, beyond four required fields? Either way, the regression lines make the trends in the data clearer to spot.

[Figure: dot plot of conversion rate by required fields, with two regression lines]
The regression lines draw attention to the core trend in the data, while also yielding a rough estimate of how much conversion rates decrease with the increase in required fields.
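The two-segment fit can be sketched with NumPy. The per-bin averages below are invented, but shaped like the pattern described above: falling through four required fields, rising after.

```python
import numpy as np

# Hypothetical average conversion rate (%) per number of required fields, 1..9
fields = np.arange(1, 10)
avg_cr = np.array([8.0, 6.9, 5.6, 4.4, 4.9, 5.3, 5.8, 6.1, 6.5])

# Segment 1: one to four required fields (~80% of sites, downward trend)
slope_lo, intercept_lo = np.polyfit(fields[:4], avg_cr[:4], 1)

# Segment 2: four to nine required fields (the other ~20%, upward trend)
slope_hi, intercept_hi = np.polyfit(fields[3:], avg_cr[3:], 1)
```

With these illustrative numbers, the first segment's slope lands near -1.2, i.e. roughly 1.2 percentage points of conversion lost per additional field, which is the kind of per-field estimate a chart subtitle can report.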

After adding the regression lines, you can summarize the main take-away with a nice, succinct subtitle:

Increasing the number of Required Fields from one to four decreases average conversion rate by 1.2% per additional field, for 80% of sites.

This caption helps orient anyone looking at the chart for the first time—especially since we’ve added several elements to provide more context.

Note how the one statement spans the three main layers of information we’ve visualized:

  1. The average conversion rate (as point estimates)
  2. The distribution of the data (the density curve)
  3. The observed trend

Thus, we’ve taken a solid first pass at answering the question:

How do conversion rates change when we vary the number of required fields?

Does this mean that all sites in that 80% will lose ~1% conversion rate for every required field after the first?

Of course not.

As mentioned in the opening section, this is the simplest question that’ll provide some insight into the problem at hand. The lowest-hanging fruit, if you will.

However, it is far from a complete answer.

You’ve gently bumped into the natural limitation of bivariate analysis (an analysis with only two variables involved): you’re only looking at the change in conversion rate through the lens of the number of required fields, when there are obviously more variables at play (the type of site, the client base, etc.).

Before making any business decisions, you would need a deeper dive into those other variables and (ideally!) the incorporation of lead-quality metrics, to better understand how the number of required fields impacts total revenue.

And this is where you come back full circle to experimentation: you can use this initial incursion to start formulating and prioritizing better experiment ideas.

For example, a successful experimentation strategy in this context would have to, first, better understand the two groups of sites discussed earlier: those in the 80% and those in the other 20%.

Additionally, more specific tests (i.e., those targeting sub-domains) would have to consider whether a site belongs to the first group (where conversion rates decrease as the number of required fields increase) or the second group (where the inverse happens)—and why.

Then, we can look at which variables might explain this difference, and what values these variables take for that site.

For example, are sites in the first group B2C or B2B? Do they sell more or less expensive goods? Do they serve different or overlapping geographic regions?

In short, you’ve used data visualization to illuminate a crucial relationship to stakeholders, and to identify knowledge gaps when considering customer behaviour across a range of sites.

Addressing these gaps would yield even more valuable insights in the iterative process of data analysis.

And these insights, in turn, can guide the experimentation process and improve business outcomes.

Your audience needs to trust your data visualization—and you.

When your experimentation team and your Executives get into the boardroom together, it’s disruptive to your business. It shakes your organization from the status quo, because it introduces new ways of making decisions.

Data-driven decisions are proven to be more effective.

In fact, MIT’s Sloan School of Management surveyed 179 large publicly traded firms and found that those that used data to inform their decisions increased productivity and output by 5-6%.

And data analysts have the power to make decision-making among Executive teams more informed.

Rather than relying on the Executive’s ability to reason through the five domains of the Cynefin Framework, data visualization presents the true power of experimentation, and its ability to solve real business problems.

But like any working dynamic, you need to foster trust—especially when you are communicating the insights and trends of data. You need to appear objective and informed.

You need to guide your audience through the avenues of action that are made clear by your analysis.

Of course, you can do this through speech. But you can also do this through the design of your data visualizations.

[Figure: experimentation program dashboard]
Data visualizations help your Executive team keep a pulse on what is happening in your experimentation program and allow them to understand how it can impact internal decision making.

Whether you present them in a dashboard for your team, or as a simple bar graph or dot plot in your slide deck, your data visualizations matter.

Clear labeling and captions, graphic elements that showcase your data dimensions, lowering visual load, and even using color to distinguish elements in your data visualization—these help your audience see what possibilities exist.

They help your audience identify patterns and associations—and even ask questions that can be further validated through experimentation.

Because experimentation takes the guesswork out of decision making. Your data visualizations make it easier for the Executive to navigate the complex situations they are challenged with today.

And that is, ultimately, the most persuasive way to evangelize experimentation at your organization.

How impactful have you found strong data visualizations on your team’s decision-making process? We’d love to hear about it.

Author

Wilfredo Contreras

Senior Data Analyst

Contributors

Lindsay Kwan

Marketing Communications Specialist


Retail manufacturers, Walmart wants your ad business

Walmart says it’s ready to start monetizing its shopper data and become a go-to ad network for retail manufacturers — both online and in-store.

With Amazon’s rapid ad growth looming large, Walmart is bringing its ad sales in-house, bridging the divide between store and digital ad teams as part of a broader effort to build up its advertising business.

Why you should care

Company executives said they want CMOs to consider “Walmart as a network I can go advertise on.” Steve Bratspies, chief merchandising officer for Walmart U.S., thinks Walmart’s unique combination of online and store purchase data from hundreds of millions of customers will make Walmart ad buys more efficient than the competition’s (e.g., Amazon’s).

Walmart is ending its relationship with Triad, which managed ad sales on the retailer’s sites and other digital properties, The Wall Street Journal first reported Tuesday. WPP’s programmatic media unit Xaxis acquired Triad in 2017.

As Amazon can attest, building up an in-house team and technology stack takes time and investment. Walmart’s efforts could take years to realize.

More on the news

  • The company plans to harness its shopper data to sell advertising and marketing opportunities to brands and manufacturers directly. For example, the Journal said, rather than having Triad manage digital ad campaigns and a Walmart team managing in-store sampling, a snack supplier will be able to work with Walmart directly on both efforts.
  • Amazon’s market share is still dwarfed by Google and Facebook, but it offers up a proven model for attracting ad dollars from digital and trade marketing budgets by leveraging shopper data — and tying ad investments directly to sales.
  • Walmart’s other properties include online marketplace Jet.com and video streaming service Vudu.

About The Author

Ginny Marvin is Third Door Media’s Editor-in-Chief, managing day-to-day editorial operations across all of our publications. Ginny writes about paid online marketing topics including paid search, paid social, display and retargeting for Search Engine Land, Marketing Land and MarTech Today. With more than 15 years of marketing experience, she has held both in-house and agency management positions. She can be found on Twitter as @ginnymarvin.
