Europeans join wave of Boeing suspensions, Trump frets

ADDIS ABABA/PARIS (Reuters) – Major European nations Britain, Germany and France joined a wave of suspensions of Boeing 737 MAX aircraft on Tuesday as U.S. President Donald Trump fretted over modern airplane design following a crash in Ethiopia that killed 157 people.

Suspension by respected European regulators was the worst setback yet for U.S. planemaker Boeing in the wake of Sunday’s crash and put pressure on the United States to follow suit.

In response, the world’s biggest planemaker, which has seen billions of dollars wiped off its market value, said it understood the decisions but retained “full confidence” in the 737 MAX and had safety as its priority.

The cause of Sunday’s crash, which followed another disaster with a 737 MAX five months ago in Indonesia that killed 189 people, remains unknown.

October’s Lion Air crash is also unresolved, but attention has focused so far on the role of a software system designed to push the plane’s nose down, as well as on airline training and maintenance.

Boeing says it plans to update the software in coming weeks.

There is no evidence yet whether the two crashes are linked.

“Pilots are no longer needed, but rather computer scientists from MIT,” Trump tweeted, lamenting that product developers always sought to go an unnecessary step further when “old and simpler” was superior.

“I don’t know about you, but I don’t want Albert Einstein to be my pilot. I want great flying professionals that are allowed to easily and quickly take control of a plane!” he added, without referring directly to Boeing or recent accidents.

VICTIMS FROM 30 NATIONS

Elsewhere in Europe, regulators in Ireland and Austria, along with carrier Norwegian Air, said they too would temporarily ground MAX 8 passenger jets as a precaution. Earlier, Singapore, Australia, Malaysia and Oman had also temporarily suspended the aircraft, following China, Indonesia and others the day before.

The European Aviation Safety Agency, which has a major role in overseeing the design of aircraft and monitors some airline operations, was expected to make a statement later on Tuesday.

Experts say it is too early to speculate on the reason for the crash. Most are caused by a unique chain of human and technical factors.

Given problems of identification at the charred disaster site, Ethiopian Airlines said it would take at least five days to start handing remains to families.

The victims came from more than 30 different nations, and included nearly two dozen U.N. staff.

“We are Muslim and have to bury our deceased immediately,” Noordin Mohamed, a 27-year-old Kenyan businessman whose brother and mother died, told Reuters.

“Losing a brother and mother in the same day and not having their bodies to bury is very painful,” he said in the Kenyan capital Nairobi where the plane had been due.

Flight ET 302 came down in a field soon after takeoff from Addis Ababa on Sunday, creating a fireball in a crater. It may take weeks or months to identify all the victims, who include a prize-winning author, a soccer official and a team of humanitarian workers.

A page of a flight crew operations manual is seen at the scene of the Ethiopian Airlines Flight ET 302 plane crash, near the town of Bishoftu, near Addis Ababa, Ethiopia March 12, 2019. REUTERS/Baz Ratner

The United States has said it remains safe to fly the planes. Still, two U.S. senators urged the Federal Aviation Administration to implement a temporary grounding.

Anxiety was also evident among some travelers, who rushed to find out from social media and travel agents whether they were booked to fly on 737 MAX planes.

If the black box recordings found at the Ethiopian crash site are undamaged, the cause of the crash could be identified quickly, although it typically takes a year for a full probe.

BETTER SOFTWARE

Boeing said it had been working since the Lion Air crash to enhance flight control software that would be deployed across the 737 MAX fleet in coming weeks.

The MAX 8 has new software that automatically pushes the plane’s nose down if a stall is detected.

The new variant of the 737, the world’s best-selling modern passenger aircraft, was viewed as the likely workhorse for global airlines for decades and another 4,661 are on order.

In Latin America, Gol in Brazil temporarily suspended MAX 8 flights, as did Argentina’s state airline Aerolineas Argentinas and Mexico’s Aeromexico.

In Asia, South Korean budget carrier Eastar Jet said it would temporarily ground its two 737 MAX 8s from Wednesday, while India ordered additional checks.

Still, major airlines from North America to the Middle East kept flying the 737 MAX. Southwest Airlines Co, which operates the largest fleet of 737 MAX 8s, said it remained confident in the safety of all its Boeing planes.

Boeing shares fell another 5.6 percent on Tuesday after having lost 5 percent on Monday.

Former FAA accident investigator Mike Daniel said the decision by regulators to ground the planes was premature. “To me it’s almost surreal how quickly some of the regulators are just grounding the aircraft without any factual information yet as a result of the investigation,” he told Reuters.

In Nairobi, the U.N. Environment Program set up a small memorial for Victor Tsang, a staff member who lost his life.

“Travel well my friend, see you on the other side,” said one entry in a condolence book beside a framed photograph, bouquet of flowers and candle. By mid-afternoon, 23 pages of the condolence book had been filled with over 250 names.

Additional reporting by Jamie Freed and Aradhana Aravindan in Singapore; Katharine Houreld and Hereward Holland in Nairobi; Eric Johnson in Seattle; James Pearson in Hanoi; Alexander Cornwell in Dubai; Heekyong Yang in Seoul; Tracy Rucinski in Chicago; David Shepardson in Washington; Writing by Andrew Cawthorne; Editing by Georgina Prodhan, Jon Boyle and Keith Weir

Marketing automation for SEOs: five time-saving strategies (Search Engine Watch)

There are many resources out there for digital marketers to help speed up recurring tasks, leaving more time to focus on more strategic initiatives to help grow and expand accounts.

A few of the resources and tools out there are scripts, Excel tools and functions, automated rules, automated bid rules, and smart bidding. Each of these can be a powerful tool if used in the correct way. We will walk through their benefits and some examples below.

Scripts

Google Scripts are JavaScript codes that can be used to help control and make adjustments to your campaign performance.

They can also be used to pull data for health checks or reporting that you may look at daily or weekly. You don’t have to be a developer to use scripts, although if you were looking to write them from scratch, some background in JavaScript would help.

There are a bunch of blogs and resources out there that already contain written scripts that can help solve some of your problems. Some scripts that I enjoy using are link checker, ad groups with no active text ads, negative keyword conflict checker, account checker, and auto-add negative keyword.

Example of Google Scripts
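To make the idea concrete, here is a minimal sketch of the logic behind a negative keyword conflict checker. Real Google Ads scripts are written in JavaScript against the Ads scripts API; this Python version only illustrates the matching rule, and the data and function name are invented for the example:

```python
def find_conflicts(active_keywords, negative_keywords):
    """Return (keyword, negative) pairs where a negative would block traffic.

    Simplified broad-match semantics: a negative blocks a keyword when all
    of the negative's terms appear somewhere in the keyword.
    """
    conflicts = []
    for kw in active_keywords:
        kw_terms = set(kw.lower().split())
        for neg in negative_keywords:
            if set(neg.lower().split()) <= kw_terms:
                conflicts.append((kw, neg))
    return conflicts

print(find_conflicts(
    ["running shoes sale", "blue running shoes"],
    ["sale", "sandals"],
))
# -> [('running shoes sale', 'sale')]
```

A script like this would run on a schedule and email you the conflict list, rather than print it.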

Excel tools & functions

Excel can be an overwhelming tool for those who are just beginning and might not have used some of the more advanced functions. Excel can be a powerful tool to help you save time performing bidding, analyzing account performance, and doing competitor analysis.

A great example of an analysis that paid search marketers find themselves doing over and over again is competitor analysis by looking at Auction insights. It can take some time to download the data and create all the charts for both brand and non-brand.

A way to speed up the analysis is with pivot tables and pivot charts, allowing you to format the data one time and just refresh it every time you want to look at what competitors are coming in and out of the auction at any given time.
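The same reshaping an Excel pivot table performs can be sketched in a few lines of Python (the Auction Insights rows here are invented for illustration): week-by-week rows become one impression-share row per competitor, so refreshed data drops straight into the same layout.

```python
from collections import defaultdict

# Made-up Auction Insights rows: (week, competitor, impression share)
rows = [
    ("W1", "competitor-a", 0.42),
    ("W1", "competitor-b", 0.18),
    ("W2", "competitor-a", 0.45),
    ("W2", "competitor-b", 0.00),  # dropped out of the auction in week 2
]

# Pivot: one row per competitor, one column per week
pivot = defaultdict(dict)
for week, competitor, share in rows:
    pivot[competitor][week] = share

for competitor, by_week in sorted(pivot.items()):
    print(competitor, by_week)
```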

Automated rules

Automated rules can help speed up tasks that you might do often, though scheduled automated rules and scripts overlap in some of their functionality.

I would recommend testing both and seeing which one works best for you. Some automated rules that can be applied are pausing and activating copy, which is useful for clients that constantly have promotions; you can also set rules to activate and pause campaigns, ad groups, and keywords.

There are also rules that are more focused on performance. You can create rules to pause low-performing ads or keywords and increase or decrease budget based on performance or day of the week.

Please note: It’s important to set parameters for budget changes, or Google could continue to push budgets into a range where you are not comfortable spending.
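That safety cap can be expressed as a tiny rule. This is an illustrative Python sketch, not Google’s rule engine; the 10% step and the hard cap are assumed values you would tune yourself:

```python
def adjust_budget(current, performance_ratio, max_budget, step=0.10):
    """Raise budget 10% when performance beats target (ratio > 1.0),
    lower it 10% otherwise -- and never exceed the hard cap."""
    if performance_ratio > 1.0:
        proposed = current * (1 + step)
    else:
        proposed = current * (1 - step)
    return min(proposed, max_budget)

print(adjust_budget(100.0, 1.4, max_budget=105.0))  # capped at 105.0
```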

Example of Google Automated Rules

Automated bid rules

Automated bid rules are part of Google’s automated rules; you can adjust keyword bids based on different criteria. These rules are powerful tools that should be used with lots of consideration.

Think about the adjustments that you are making daily or weekly and the steps that you go through. There might be an opportunity to automate those steps if your process is the same day to day or week to week. A few popular bid adjustments you can implement are raising keywords to the top-of-page bid, raising keywords to the first-page bid, and adjusting bids based on average position. But as you can imagine, if you gave Google the power to increase bids to the top of the page on every keyword, spend could quickly get out of control.

It is important to set limits on those types of rules and to apply them to a smaller set of keywords: maybe these are your top performers, or perhaps you have launched a new product and want to stay at the top of the page to raise brand awareness.

Another popular rule would be adjusting based on cost per conversion, allowing you to increase or decrease bids based on cost per conversion and conversion volume.

Some things to think about here are the number of conversions, average position, and impression share. You don’t want to adjust keywords that don’t have enough data, and you don’t want to continue to push keywords that are already at the top of the page. Automated bid rules are customizable so you can determine the conversion, average position, and impression share limits.
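Those guardrails can be captured in a single predicate. A hedged Python sketch, where the thresholds are placeholder values you would set in the rule yourself:

```python
def should_raise_bid(conversions, avg_position, impression_share,
                     min_conversions=15, top_position=1.5, max_share=0.90):
    """Raise a bid only when the keyword has enough data AND room to grow."""
    if conversions < min_conversions:
        return False  # not enough data to act on
    if avg_position <= top_position:
        return False  # already effectively at the top of the page
    if impression_share >= max_share:
        return False  # little impression share left to win
    return True

print(should_raise_bid(conversions=20, avg_position=3.2, impression_share=0.55))
# -> True
```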

The upshot with automated marketing functions is that they’re very powerful and can save tons of time, but often have the potential to go haywire if they’re not properly set up. Use them wisely, and give yourself bandwidth to pursue more strategic initiatives which I’ll cover in future posts.

Smart bidding

Google’s smart bidding has come a long way since enhanced CPC was introduced a few years ago. Google has since introduced other bidding strategies, including maximize clicks, target impression share, target CPA, target ROAS (return on ad spend), and maximize conversions.

With these bidding strategies Google will take into account device, location, time of day, remarketing and language when pushing out bids. Google does recommend these strategies for advertisers with at least 30 conversions in the past 30 days for target CPA and 50 conversions for target ROAS.
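Those conversion thresholds make for a simple eligibility check before you commit to a strategy. A minimal sketch based on the 30-day counts quoted above (the function and its name are illustrative, not part of any Google API):

```python
def eligible_strategies(conversions_30d):
    """Smart-bidding strategies Google recommends at this conversion volume,
    per the 30-conversion (target CPA) and 50-conversion (target ROAS)
    guidance quoted above."""
    strategies = []
    if conversions_30d >= 30:
        strategies.append("target CPA")
    if conversions_30d >= 50:
        strategies.append("target ROAS")
    return strategies

print(eligible_strategies(35))  # -> ['target CPA']
```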

Automated bidding is something that is best A/B tested with campaign drafts and experiments to ensure that it is the best decision for your business. In some cases, there might not be enough data to make the most data-driven decision and other bidding methods might be better for your business.

Example of Google Smart Bidding

Scripts, Excel tools and functions, automated rules, automated bid rules, and smart bidding are a few ways that marketers can reduce time spent on recurring analysis and focus more on the strategic initiatives that grow accounts.

Before trying to attack all five of these tools at once, think about which tools will save you the most time and start there.

Try starting with one automated strategy a week; you may be surprised how much time you can save.

Lauren Crain is a Client Services Lead in 3Q Digital’s SMB division, 3Q Incubate.


Wealthy millennials boosting the art market

ZURICH (Reuters) – The global art market experienced another uptick in 2018, helped by an increase in the spending power of millennials, a report published by UBS and Art Basel said on Friday.

The logo of Sotheby’s auction house is seen at a branch office in Zurich, Switzerland October 25, 2016. REUTERS/Arnd Wiegmann

A survey of wealthy individuals conducted by UBS and art economist Clare McAndrew for the report found millennials were buying art more actively and frequently taking to the internet to do so. It found that more of them were willing to shell out big money on art than their older peers.

They also provided a boost for female artists.

“For a generation that might never own a car, their appetite for buying art is encouraging,” UBS Group Chief Marketing Officer Johan Jervøe told Reuters.

“It may be a reflection of the unique and often experiential qualities of art and collectibles as long-term assets.”

Overall sales in the art market grew 7 percent to $67.4 billion in 2018, according to UBS and Art Basel’s third annual art market report.

People between 22 and 37 years of age made up nearly half of the wealthy art buyers who regularly spent $1 million or more on an artwork over the past two years, the survey found, despite representing just over a third of the high-net-worth individuals surveyed.

The results of the survey, which was conducted in Britain, Germany, Japan, Singapore and Hong Kong, offered a silver lining for the art world as geopolitical and economic worries have weighed on overall sentiment.

As millennials grow into greater wealth, and benefit from a generational shift in wealth inherited from aging parents, their wealth could reach $24 trillion by 2020, according to Deloitte.

Millennials’ spending habits could provide significant potential for both online sales and art’s squeezed middle, Jervøe said, benefiting the industry’s overall health.

This younger generation of collectors with over $1 million in household assets to spend or invest helped buoy the digital art marketplace to $6 billion in sales last year.

And a majority of them also took to photo-sharing social media platform Instagram to source and buy art.

Between 2016 and 2018, 93 percent of the millennials made purchases online, spending $106,930 on average, while the slightly older Generation X (between 38 and 52 years of age) spent around half a million dollars on an average web purchase, but did so with less frequency.

Reporting by Brenna Hughes Neghaiwi; Editing by Mark Heinrich

How to Be More Creative and Productive in Your Job

How many content marketers struggle to be prolific all year round?

Not to mention consistently creative?

“I’ve found that many content marketers misunderstand how to be prolific. So, my latest hand-drawn infographic explains how to build the right habits to boost both creativity and productivity,” explains Henneke Duistermaat of Enchanting Marketing.

Sustained creativity becomes possible when it’s nurtured into becoming a habit, at which point it can be consistent—and therefore prolific.


Check out the infographic to learn how to nurture three essential creative habits that will make you the creatively prolific marketer you’d like to be:

How data visualization enhances business decision making

Data visualization is the art of our age.

Just as Michelangelo approached that giant block of Carrara marble and said, “I saw the angel in the marble and carved it until I set him free,” analysts are approaching data with the same visionary and inquisitive mind.

In today’s age, where big data reigns, the art of data analysis is making sense of our world.

Analysts are chiseling the bulk of raw data to create meaning—patterns and associations, maps and models— to help us draw insights, understand trends, and even make decisions from the stories the data tell.

Data visualization is the graphical display of abstract information for two purposes: sense-making (also called data analysis) and communication. Important stories live in our data and data visualization is a powerful means to discover and understand these stories, and then to present them to others.

But such complex information is not easily digested by the human brain until the analysis is presented as a data visualization.

Tables, charts, and graphs provide powerful representations of numerous data points so that the insights and trends are easily understood by the human brain.

That’s why data visualization is one of the most persuasive techniques to evangelize experimentation today—particularly in an era of ever-decreasing attention spans.

On a slide. On a dashboard in Google Data Studio. Or simply something you plan to sketch on a whiteboard. This presentation of the data will decide whether your trends and insights are understood and accepted, and whether the right inferences are drawn about what action to take.

A thoughtfully crafted visualization conveys an abundance of complex information using relatively little space and by leveraging our visual system—whether that’s the optimal number of lead generation form fields or the potential ROI of your program throughout the quarter.

In this post, we dig into the practice of designing data visualizations for your audience. You will learn:

  • How your data visualizations can enhance the Executive decision-making process, using the guidelines of the Cynefin Framework
  • Why data visualizations are the most powerful way for the human brain to compute complex information through dual processing theory
  • What makes data visualizations effective using the five qualities defined by expert Alberto Cairo
  • And a real-world example of how you can work through a problem to arrive at the most effective data visualization for your audience.

The Brain (Or, why we need data visualization)

You may be familiar with System 1 and System 2 thinking, known as dual processing theory. System 1 (or Type 1) is the predominant fast, instinctual decision-making and System 2 (Type 2) is the slow, rational decision-making.

Dual Process Theory

Dual Process Theory categorizes human thinking into two types or systems.

We often relegate System 1 thinking to your audience’s emotions. (We talked about it in “Evangelizing experimentation: A strategy for scaling your test and learn culture” or in “I feel, therefore I buy: How your users make buying decisions.”)

But that immediate grasp over complex information in a data visualization is also related to System 1 thinking.

A large part of our brain is dedicated to visual processing. It’s instinctual. It’s immediate.

If you have a strong data visualization, every sighted person can understand the information at hand. A seemingly simple 5×5 chart can provide a snapshot of thousands of data points.

In other words, visualizing data with preattentive features in mind is akin to designing ergonomic objects: you know that a sofa is made for sitting, and that the handle on a coffee mug is designed for your hand.

Preattentive processing occurs before conscious attention. Preattentive features are processed very quickly…within around 10 milliseconds.

When creating data visualizations, you are designing for human physiology. Any other method of translating that information is a disservice to your message and your audience.

When we consider the speed with which people grasp the multiple data points in a problem, thanks to dual processing theory and preattentive processing, it’s almost foolish not to take advantage of data visualization.

When you design data visualizations, you are understanding your audience.

Understanding how Executives make decisions

A data visualization is a display of data designed to enable analysis, exploration, and discovery. Data visualizations aren’t intended mainly to convey messages that are predefined by their designers. Instead they are often conceived as tools that let people extract their own conclusions from the data.

Data analysis allows Executives to weigh the alternatives of different outcomes of their decisions.

And data visualizations can be the most powerful tool in your arsenal, because your audience can see thousands of data points on a simple chart.

Your data visualization allows your audience to gauge (in seconds!) a more complete picture so they can make sense of the story the data tell.

In Jeanne Moore’s article “Data Visualization in Support of Executive Decision Making,” the author explored the nature of strategic decision making through the Cynefin framework.

The Cynefin Framework

The Cynefin Framework aids Executives in determining how to best respond to situations by categorizing them in five domains: Simple, Complicated, Complex, Chaotic and Disorder. Source: HBR’s A Leader’s Framework for Decision Making

The Cynefin Framework

The Cynefin Framework (pronounced ku-nev-in) allows business leaders to categorize issues into five domains, based on the ability to predetermine the cause and effect of their decisions.

Created by David Snowden in 1999 when he worked for IBM Global Services, the Cynefin framework has since informed leadership decision making at countless organizations.

The five domains of the Cynefin Framework are:

  • In the Simple Domain, there is a clear cause and effect. The results of the decision are easy to predict and can be based on processes, best practices, or historical knowledge. Leaders must sense, categorize and respond to issues.
  • In the Complicated Domain, multiple answers exist. Though there is a relationship between cause and effect, it may not be clear at first (think known unknowns). Experts sense the situation, analyze it and respond to the situation.
  • In the Complex Domain, decisions can be clarified by emerging patterns. That’s because issues in this domain are susceptible to the unknown unknowns of the business landscape. Leaders must act, sense and respond.
  • In the Chaotic Domain, leaders must act to establish order in a chaotic situation (an organizational crisis!), and then gauge where stability does and does not exist, in order to get a handle on the situation and move it into the Complex or Complicated Domain.
  • And in the Disorder Domain, the situation cannot be categorized in any of the other four domains. It is utterly unknown territory. Leaders can analyze the situation and categorize different parts of the problem into the other four domains.

In organizations, decision making is often related to the Complex Domain because business leaders are challenged to act in situations that are seemingly unclear or even unpredictable.

Leaders who try to impose order in a complex context will fail, but those who set the stage, step back a bit, allow patterns to emerge, and determine which ones are desirable will succeed. They will discern opportunities for innovation, creativity, and new business models.

David J. Snowden and Mary E. Boone

Poor quarterly results, management shifts, and even a merger—these Complex Domain scenarios are unpredictable, with several methods of responding, according to David J. Snowden and Mary E. Boone.

In other words, Executives need to test and learn to gather data on how to best proceed.

“Leaders who don’t recognize that a complex domain requires a more experimental mode of management may become impatient when they don’t seem to be achieving the results they were aiming for. They may also find it difficult to tolerate failure, which is an essential aspect of experimental understanding,” explain David J. Snowden and Mary E. Boone.

Probing and sensing the scenario to determine a course of action can be assisted by a data analyst, who helps leaders collaboratively understand the historical and current information at hand, in order to guide the next course of action.

An organization should take little interest in evaluating — and even less in justifying — past decisions. The totality of its interest should rest with how its data can inform its understanding of what is likely to happen in the future.

Of course, there is always the threat of oversimplifying issues, treating scenarios like they have easy answers.

But even with situations in the other domains of the Cynefin Framework, data visualization can provide insight into next steps—if they meet certain criteria.

What makes an effective data visualization

The presenter of the visualization must also provide a guiding force to assist the executive in reaching a final decision, but not actually formulate the decision for the executive.

With data visualization, there will always be insightful examples and examples that clearly missed the mark.

Avinash Kaushik, in his Occam’s Razor article “Closing Data’s Last-Mile Gap: Visualizing For Impact!”, called the ability of data visualizations to influence the Executive’s decision-making process closing the “last-mile” gap.

It can take an incredible effort to gather, sort, analyze and glean insights and trends from your data. If your analysis is solid, if your insights and trends are enlightening, you don’t want to muddle your audience with a confusing data visualization.

Remember: a data visualization is only as impactful as its design is on your audience.

In terms of the value in data visualization, it must provide simplicity, clarity, intuitiveness, and insightfulness, with gap, pattern, and trend detection, in a collaboration-enabling manner, supporting the requirements and decision objectives of the executive.

Alberto Cairo’s Five Qualities of Great Data Visualizations

Alberto Cairo, author of “The Truthful Art: Data, Charts, and Maps for Communication,” outlines five qualities of great data visualizations. Your data visualization should be:

  1. Truthful: It should be based on thorough and objective research—just as a journalist is expected to represent the truth to the best of their abilities, so too is the data analyst.
  2. Functional: It should be accurate and allow your audience to act upon your information. For instance, they can perceive the incremental gains of your experimentation program over time in a sloping trendline.
  3. Beautiful: It needs to be well-designed. It needs to draw in your audience’s attention through an aesthetically pleasing display of information.
  4. Insightful: It needs to provide evidence that would be difficult to see otherwise. Trends, insights, and inferences must be drawn by the audience, in collaboration with the data analyst.
  5. Enlightening: It needs to illuminate your evidence. It needs to enlighten your audience with your information in a way that is easy to understand.

When you nail down all five of these criteria, your data visualization can shift your audience’s ways of thinking.

It can lead to those moments of clarity on what action to take next.

So, how are these design decisions made in data visualization?

Here’s an example.


How we make decisions about data visualization: An example in process

A note on framing: While the chart and data discussed below are real, the framing is artificial to protect confidentiality. The premise of this analysis is that we can generate better experiment ideas and prioritize future experiments by effectively communicating the insights available in the data.

Lead generation forms.

You probably come across these all the time in your web searches. Some forms have multiple fields and others have few—maybe enough for your name and email.

Suppose you manage thousands of sites, each with its own lead generation form, some long and some short. And you want to determine how many fields you should require from your prospects.

If you require too many form fields, you’ll lose conversions; too few, and you’ll lose information to qualify those prospects.

It’s a tricky situation to balance.

Like all fun data challenges, it’s best to pare the problem down into smaller, manageable questions. In this case, the first question you should explore is the relationship between the number of required fields and the conversion rate. The question is:

How do conversion rates change when we vary the number of required fields?

Unlike lead quality—which can be harder to measure and is appraised much further down the funnel—analyzing the relationship between the number of required fields and the number of submissions is relatively straightforward with the right data in hand. (Cajoling the analytics suite to provide that data can be an interesting exercise in itself—some will not do so willingly.)

So, you query your analytics suite, and (assuming all goes well), you get back this summary table:

WiderFunnel Data Visualization Examples
On this table, how immediately do you register the differences between the average conversion rates? Note how you process the information—it’s System 2 thinking.
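Since the real data behind the table is confidential, here is a synthetic stand-in showing how such a summary is computed: average conversion rate and share of sites, grouped by the number of required fields (all numbers invented for illustration):

```python
from collections import defaultdict

# Invented sample: (required_fields, site conversion rate)
sites = [
    (1, 0.30), (2, 0.25), (2, 0.23), (3, 0.20),
    (4, 0.15), (5, 0.18), (6, 0.21),
]

groups = defaultdict(list)
for fields, rate in sites:
    groups[fields].append(rate)

summary = {
    fields: {
        "avg_cr": sum(rates) / len(rates),
        "pct_sites": len(rates) / len(sites),
    }
    for fields, rates in groups.items()
}
print(summary[2])  # average of the two 2-field sites
```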

What’s the most effective way to convey the message in this data?

Most of you probably glossed over the table, and truth be told, I don’t blame you—it’s borderline rude to expect anyone to try to make sense of these many variables and numbers.

However, if you spend half a minute or so analyzing the table, you will make sense of what’s going on.

In this table format, you are processing the information using System 2 thinking—the cognitive way of understanding the data at hand.

On the other hand, note how immediate your understanding is with a simple data visualization…

The bar graph

WiderFunnel Data Visualization Examples Bar Graph
Compared to the table above, the decrease in conversion rate between one and four required fields is immediately obvious, as is the upward trend after four. Your quick processing of these differences is System 1 thinking.

In terms of grasping the relationship in the data, it was pretty effective for a rough-and-ready chart.

In less than a second, you were able to see that conversion rates go down as you increase the number of required fields—but only until you hit four required fields. At this point, average conversion rates (intriguingly!) start to increase.

But you can do better…

For a good data visualization, you want to gracefully straddle the line between complexity and understanding:

How can we add layers of information and aesthetics that enrich the data visualization, without compromising understanding?

No matter how clever the choice of the information, and no matter how technologically impressive the encoding, a visualization fails if the decoding fails.

Adding layers of information can’t be at the expense of your message—rather, it has to be in service of that message and your audience. So, when you add anything to the chart above, the key question to keep in mind is:

Will this support or undermine making informed business decisions?

In this case, you can have some fun by going through a few iterations of the chart, to see if any visualization works better than the bar chart.

The dot plot

Compared to a bar chart, a dot plot encodes the same information, while using fewer pixels (which lowers visual load), and unshackles you from a y-axis starting at zero (which is sometimes controversial, according to this Junk Charts article and this Stephanie Evergreen article).

In the context of digital experimentation, not starting the y-axis at zero generally makes sense because even small differences between conversion rates often translate into significant business impact (depending on number of visitors, the monetary / lifetime value of each conversion, etc.).

In other words, you should design your visualization to make apparent small differences in conversion rates because these differences are meaningful—in this sense, you’re using the visualization like researchers use a microscope.

If you are still not convinced, an even better idea (especially for an internal presentation) would be to map conversion rate differences to revenue—in that case, these small differences would be amplified by your site’s traffic and conversion goal’s monetary value, which would make trends easier to spot even if you start at 0.

Either way, as long as the dots are distant enough, large enough to stand out but small enough to not overlap along any axis, reading the chart isn’t significantly affected.

[Figure: dot plot of average conversion rate by number of required fields]
Compared to the bar chart, the dot plot lowers the visual load and gives us flexibility with the y-axis (it no longer starts at zero), allowing us to emphasize the trend.
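A minimal sketch of the dot-plot variant, again with hypothetical numbers; note the y-axis deliberately starts above zero:

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

# Same illustrative data as the bar-chart sketch.
fields = list(range(1, 10))
conv_rate = [7.5, 6.8, 5.9, 4.2, 4.6, 5.0, 5.4, 5.9, 6.3]

fig, ax = plt.subplots()
ax.plot(fields, conv_rate, "o")   # dots instead of bars
ax.set_ylim(3.5, 8.0)             # y-axis no longer anchored at zero
ax.set_xlabel("Number of required fields")
ax.set_ylabel("Average conversion rate (%)")
fig.savefig("dot_plot.png")
```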

More importantly (spoiler alert!), our newly-found real estate (after changing from bars to dots) allows you to add layers of information without cluttering the data visualization.

One such layer is the data’s density (or distribution), represented by a density plot.

A density plot

A density plot uses the height of the curve to show roughly how many data points (what percentage of sites) require how many fields. In this case, the density plot adds the third column (“Percent of Sites”) from the table you saw earlier.

That makes it easy to see (once you understand how density plots work) how much stock to place in those averages.

For example, an average that is calculated on a small number of sites (say, less than 1% of the available data) is not as important or informative as an average that represents a greater number of sites.

So, if an average was calculated based on a mere ten sites, we would be more wary of drawing any inferences pertaining to that average.

[Figure: dot plot with an overlaid density curve showing the percent of sites per field count]
After adding the density plot, you can see that most sites require two fields, roughly the same require one and three, and after eight required fields, the distribution is pretty much flat—meaning that we don’t have many data points. So, those incredibly high conversion rates (relative to the rest) are fairly noisy and unrepresentative—something we’ll verify with confidence intervals later on.
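Layering the density information onto the dot plot can be sketched with a second y-axis. The percentages below are invented, summing to 99% with the remaining 1% beyond nine fields:

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fields = list(range(1, 10))
conv_rate = [7.5, 6.8, 5.9, 4.2, 4.6, 5.0, 5.4, 5.9, 6.3]
pct_sites = [22, 30, 16, 12, 8, 5, 3, 2, 1]  # hypothetical "% of sites" column

fig, ax = plt.subplots()
ax.plot(fields, conv_rate, "o")
ax.set_ylim(3.5, 8.0)
ax.set_xlabel("Number of required fields")
ax.set_ylabel("Average conversion rate (%)")

# Second axis: distribution of sites, drawn as a faint filled curve under the dots.
ax2 = ax.twinx()
ax2.fill_between(fields, pct_sites, alpha=0.2)
ax2.set_ylabel("Percent of sites")
fig.savefig("dot_plot_density.png")
```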

Visualizing uncertainty and confidence intervals

When we add the density plot, we see that most of our data comes from sites that require between one and four fields (80%, if you added the percentages in the table), the next big chunk (19%) comes from sites that require five to nine fields, and the remaining 1% (represented by the flat sections of the density curve) require more than nine. (The 80/20 rule strikes again!)

Another useful layer of information is the confidence interval for these averages. Given the underlying data (and how few data points go into some averages), how can we represent our confidence (or uncertainty) surrounding each average?

Explaining Confidence Intervals

If you’ve never encountered confidence intervals before, here’s a quick example to explain the intuition behind them…

Let’s say you’re taking a friend camping for three days, and you want to give them enough information so they can pack appropriately.

You check the forecast and see lows of 70°F, highs of 73°F, and an average of 72°F.

So, when you tell your friend “it’s going to be about 72°F“—you’re fairly confident that you’ve given them enough information to enjoy the trip (in terms of packing and preparing for the weather, of course).

On the other hand, suppose you’re camping in a desert that’s expecting lows of 43°F, highs of 100°F, and (uh oh) an average of 72°F.

Assuming you want this person to travel with you again, you probably wouldn’t say, “it’s going to be about 72°F.” The information you provided would not support them in making an informed decision about what to bring.

That’s the idea behind confidence intervals: they represent uncertainty surrounding the average, given the range of the data, thereby supporting better decisions.

Visually, confidence intervals are represented as lines (error bars) that extend from the point estimate to the upper and lower bounds of our estimate: the longer the lines, the wider our interval, the more variability around the average.

When the data are spread out, confidence intervals are wider, and our point estimate is less representative of the individual points.

Conversely, when the data are closer together, confidence intervals are narrower, and the point estimate is more representative of the individual points.

[Figure: dot plot with confidence intervals (error bars) around each average]
Once you add error bars, you can see that many of those enticingly high conversion rates are muddled by uncertainty: at twelve required fields, the conversion rate ranges from less than 10% to more than 17%! Though less extreme, a similar concern holds for data points at ten and eleven required fields. What’s happening at thirteen, though?
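Error bars can be added with matplotlib's errorbar call. The interval half-widths here are made up, widening where the data would be sparse:

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt

fields = list(range(1, 10))
conv_rate = [7.5, 6.8, 5.9, 4.2, 4.6, 5.0, 5.4, 5.9, 6.3]
# Hypothetical half-widths of 95% confidence intervals:
# narrow where data is plentiful, wide where it is sparse.
ci = [0.3, 0.25, 0.35, 0.4, 0.6, 0.8, 1.1, 1.5, 2.0]

fig, ax = plt.subplots()
ax.errorbar(fields, conv_rate, yerr=ci, fmt="o", capsize=3)
ax.set_xlabel("Number of required fields")
ax.set_ylabel("Average conversion rate (%)")
fig.savefig("dot_plot_ci.png")
```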

At this point, there are two things to note: first, when you look at this chart, your attention will most likely be drawn to the points with the widest confidence intervals.

That is, the noisiest estimates (the ones with fewer data points and / or more variability) take up the most real estate and command the most attention.

Obviously, this is not ideal—you want to draw attention to the more robust and informative estimates: those with lots of data and narrower intervals.

Second, the absence of a confidence interval around thirteen required fields means that either there’s only one data point (which is likely the case, given the density curve we saw earlier), or all the points have the same average conversion rate (not very likely).

Luckily, both issues have the same solution: cut those points out.

How to best handle outliers is a lively topic—especially since removing outliers can be abused to contort the data to fit our desired outcomes. In this case, however, there are several good reasons to do so.

The first reason has already been mentioned—these outliers come from less than 1% of our entire data set: so, despite removing them, we are still representing 99% of our data.

Second, they are not very reliable or representative, as evidenced by the density curve and the error bars.

Finally, and more importantly—we are not distorting the pattern in the data: we’re still showing the unexpected increase in the average conversion rate beyond four required fields.

We are doing so, however, using the more reliable data points, without giving undue attention to the lower quality ones.

Lastly, to visualize and quantify our answer to the question that sparked the whole analysis (how do conversion rates change when we vary the number of required fields?), we can add two simple linear regressions: the first going from one to four required fields, the second from four to nine required fields.

Why two, instead of the usual one?

Because we saw from the density chart discussion that 80% of our data comes from sites requiring one to four fields, a subset that shows a strong downward trend.

Given the strength of that trend, and that it spans the bulk of our data, it’s worth quantifying and understanding, rather than diluting it with the upward trend from the other 20%.

That remaining 20%, then, warrants a deeper analysis: what’s going on there—why are conversion rates increasing?

The answer to that will not be covered in this article, but here’s something to consider: could there be qualitative differences between sites, beyond four required fields? Either way, the regression lines make the trends in the data clearer to spot.

[Figure: dot plot with two regression lines]
The regression lines draw attention to the core trend in the data, while also yielding a rough estimate of how much conversion rates decrease with the increase in required fields.
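The two regression lines can be fit with numpy.polyfit over the two sub-ranges, one per segment. Same hypothetical data as the earlier sketches:

```python
import matplotlib
matplotlib.use("Agg")
import matplotlib.pyplot as plt
import numpy as np

fields = np.arange(1, 10)
conv_rate = np.array([7.5, 6.8, 5.9, 4.2, 4.6, 5.0, 5.4, 5.9, 6.3])

fig, ax = plt.subplots()
ax.plot(fields, conv_rate, "o")

# Two separate linear fits: one for 1-4 required fields, one for 4-9.
for lo, hi in [(1, 4), (4, 9)]:
    mask = (fields >= lo) & (fields <= hi)
    slope, intercept = np.polyfit(fields[mask], conv_rate[mask], 1)
    xs = np.array([lo, hi])
    ax.plot(xs, slope * xs + intercept)

ax.set_xlabel("Number of required fields")
ax.set_ylabel("Average conversion rate (%)")
fig.savefig("dot_plot_regression.png")
```

With these invented numbers the first segment's slope comes out around -1.1 points per field, in the same spirit as the -1.2% take-away quoted below.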

After adding the regression line, you summarize the main take-away with a nice, succinct subtitle:

Increasing the number of Required Fields from one to four decreases average conversion rate by 1.2% per additional field, for 80% of sites.

This caption helps orient anyone looking at the chart for the first time—especially since we’ve added several elements to provide more context.

Note how the one statement spans the three main layers of information we’ve visualized:

  1. The average conversion rate (as point estimates)
  2. The distribution of the data (the density curve)
  3. The observed trend

Thus, we’ve taken a solid first pass at answering the question:

How do conversion rates change when we vary the number of required fields?

Does this mean that all sites in that 80% will lose ~1% conversion rate for every required field after the first?

Of course not.

As mentioned in the opening section, this is the simplest question that’ll provide some insight into the problem at hand. The lowest-hanging fruit, if you will.

However, it is far from a complete answer.

You’ve gently bumped into the natural limitation of bivariate analysis (an analysis with only two variables): you’re only looking at the change in conversion rate through the lens of the number of required fields, when there are obviously more variables at play (the type of site, the client base, etc.).

Before making any business decisions, you would need a deeper dive into those other variables and (ideally!) incorporate lead-quality metrics, to better understand how the number of required fields impacts total revenue.

And this is where you come back full circle to experimentation: you can use this initial incursion to start formulating and prioritizing better experiment ideas.

For example, a successful experimentation strategy in this context would have to, first, better understand the two groups of sites discussed earlier: those in the 80% and those in the other 20%.

Additionally, more specific tests (i.e., those targeting sub-domains) would have to consider whether a site belongs to the first group (where conversion rates decrease as the number of required fields increases) or the second group (where the inverse happens)—and why.

Then, we can look at which variables might explain this difference, and what values these variables take for that site.

For example, are sites in the first group B2C or B2B? Do they sell more or less expensive goods? Do they serve different or overlapping geographic regions?

In short, you’ve used data visualization to illuminate a crucial relationship to stakeholders, and to identify knowledge gaps when considering customer behaviour across a range of sites.

Addressing these gaps would yield even more valuable insights in the iterative process of data analysis.

And these insights, in turn, can guide the experimentation process and improve business outcomes.

Your audience needs to trust your data visualization—and you.

When your experimentation team and Executives get into the boardroom together, it disrupts your business—in a good way. It shakes your organization out of the status quo, because it introduces new ways of making decisions.

Data-driven decisions are proven to be more effective.

In fact, MIT’s Sloan School of Management surveyed 179 large publicly traded firms and found that those that used data to inform their decisions increased productivity and output by 5-6%.

And data analysts have the power to make decision-making among Executive teams more informed.

Rather than relying on an Executive’s ability to reason through the five domains of the Cynefin Framework, data visualization presents the true power of experimentation—and its ability to solve real business problems.

But like any working dynamic, you need to foster trust—especially when you are communicating the insights and trends of data. You need to appear objective and informed.

You need to guide your audience through the avenues of action that are made clear by your analysis.

Of course, you can do this through speech. But you can also do this through the design of your data visualizations.

[Figure: example experimentation dashboard]
Data visualizations help your Executive team keep a pulse on what is happening in your experimentation program and allow them to understand how it can impact internal decision making.

Whether you are presenting them in a dashboard where your team can keep a pulse on what’s happening with your experimentation program, or if it’s a simple bar graph or dot plot in your slide deck, your data visualizations matter.

Clear labeling and captions, graphic elements that showcase your data dimensions, lowering visual load, and even using color to distinguish elements in your data visualization—these help your audience see what possibilities exist.

They help your audience identify patterns and associations—and even ask questions that can be further validated through experimentation.

Because experimentation takes the guesswork out of decision making. And your data visualizations make it easier for Executives to navigate the complex situations they face today.

And that is, ultimately, the most persuasive way to evangelize experimentation at your organization.

How impactful have you found strong data visualizations on your team’s decision-making process? We’d love to hear about it.

Author

Wilfredo Contreras

Senior Data Analyst

Contributors

Lindsay Kwan

Marketing Communications Specialist



Oil majors strut into Houston for annual energy conference


(Reuters) – The oil industry converges this week on Houston at CERAWeek, the largest gathering of top energy executives in the Americas, with oil majors showing a bigger presence as the United States has taken the crown as the largest crude producer in the world.

FILE PHOTO – A combination of file photos shows the logos of five of the largest publicly traded oil companies; BP, Chevron, Exxon Mobil, Royal Dutch Shell, and Total. REUTERS/File Photo

After a year that saw international crude oil prices surge to more than $87 a barrel in the fall then tumble, the market has been calmer of late, even with production limitations imposed by a combination of OPEC’s output cuts and large-scale sanctions placed on Iran and Venezuela by the United States.

U.S. crude output has rocketed to more than 12 million barrels a day, surpassing former leaders Russia and Saudi Arabia, but that success comes as independent U.S. shale companies are reducing drilling under pressure from investors demanding improved returns.

Even with prices at relatively stable levels, U.S. sanctions on Iran and Venezuela could disrupt the current calm. It remains unclear whether the United States will continue to offer some Iranian oil buyers purchase waivers, and whether Venezuela’s President Nicolas Maduro will face additional sanctions.

Both U.S. Secretary of State Mike Pompeo and Energy Secretary Rick Perry will speak at the conference.

The larger presence of the majors, including U.S. companies Exxon Mobil and Chevron, comes as those firms are shifting investments to shale in west Texas and New Mexico, and connecting those oil fields to their coastal refineries and chemical plants.

“It’s a little bit different than what’s been seen historically,” said Staale Gjervik, president of Exxon’s shale business. Its shale deliberations now include asking, “What does that mean for the folks downstream and on the Gulf Coast and vice versa?” Gjervik said.

Shale wells are cheaper to drill and faster to start production, offsetting the majors’ past focus on giant fields whose payoff can be decades into the future.

In addition to bringing new wells into production, Royal Dutch Shell PLC is building an inventory of shale wells it can tap on a flexible schedule, said Amir Gerges, head of Shell’s Permian operations. “If we find surplus cash at the end of the year, or if oil prices respond quickly in a certain year, we can easily reinvest that for near term cash flow,” Gerges said.

Shale has sent U.S. exports ballooning to more than 3 million barrels of crude a day, upending global supply.

“It reflects the rebalancing that has gone on in world oil,” said Daniel Yergin, vice chairman of organizer IHS Markit. “This is the first CERAWeek ever where the world’s largest producer is the country where we are holding the conference, which is the United States.”

Chevron CEO Michael Wirth is scheduled to speak, along with several Exxon executives and BP Plc Chief Executive Bob Dudley.

Saudi Arabia is notable for its diminished presence this year. Saudi Aramco, the state-run oil company, is holding its annual board meeting this week, and Saudi officials noted they were prominently featured at London’s recent International Petroleum Week conference.

However, CERAWeek also follows a period in which the Kingdom has faced more U.S. pressure to keep oil prices low, threats from antitrust legislation currently moving through Congress, and anger at the killing of journalist Jamal Khashoggi last year.

Saudi Arabia is the leading producer among the Organization of the Petroleum Exporting Countries, whose Secretary General, Mohammad Barkindo, is attending the conference, along with representatives from the United Arab Emirates. In recent years OPEC representatives have held meetings with executives from U.S. shale companies, in an effort to better understand shale and as the rhetoric from state-run producers has shifted from its adversarial approach in the past.

However, shale execs in the past have shied away from publicity surrounding such get-togethers, including a dinner at one of Houston’s fanciest restaurants last year. Those execs are wary of being viewed in collaboration with OPEC, and this year’s conference features fewer presentations from shale companies as well. OPEC officials have said they do plan on meeting with shale executives at this year’s conference.

Both U.S. industry groups and OPEC nations oppose the ‘NOPEC’ legislation, which has passed committees in both U.S. houses of Congress, seeing it as a threat to production that could cause prices to rise.

“I don’t think they can work together,” Yergin said. “But without some stabilizing mechanisms in the oil market you’d have a lot more volatility, and if you had a lot more volatility you’d have a lot less investment.”

Reporting By Jennifer Hiller in Houston, Ron Bousso in London, Rania El Gamal in Dubai and David Gaffen in New York; Editing by Diane Craft


Facebook lost 15 million users? Marketers remain unfazed


Source: Edison Research 2019

Facebook has lost an estimated 15 million users in the U.S. during the last two years, according to a report from Edison Research. The firm’s “The Infinite Dial 2019” surveyed 1,500 U.S. residents aged 12 and older and found that overall Facebook usage has dropped from 67 percent to 61 percent in two years’ time, with the 12-to-34 age segment down from 82 million in 2017 to 65 million this year.

Twitter usage is also on a downward slope, going from 23 percent to 19 percent between 2017 and 2019. The report found, overall, social media use has stagnated since 2016, with the number of respondents claiming to be on social remaining around 77 to 80 percent for the past three years. But social media marketing experts said they’re not seeing any impact from declining social media use.

People aren’t leaving social, just shifting platforms. While the report showed Facebook user numbers were dropping, Facebook-owned Instagram is experiencing a steady rise. Instagram’s audience reach is still much smaller than Facebook’s, but Edison’s survey found Instagram usage has climbed from 34 percent in 2017 to 39 percent.

Steve Weiss, CEO of digital marketing agency MuteSix, said he doesn’t believe Facebook users are leaving, but instead the current user base is simply aging.

“While the younger demographics may be shifting to Instagram and Snapchat, Facebook is also seeing increased gains from the 55+ segment. In other words, users aren’t exactly leaving, they are simply shifting,” said Weiss. According to Edison’s report, the 55+ age segment of respondents was the only group whose Facebook usage had increased since 2017.

Yuval Ben-Itzhak, CEO of social media marketing platform Socialbakers, said his company doesn’t have any data indicating a drop in Facebook user numbers.

“In Q4, Facebook reported an increase,” said Ben-Itzhak. “As for Twitter, they have stopped reporting on users, which could mean users did not grow.”

Ben Heiser, a content strategist for the Drum Agency, agrees with Weiss in terms of Facebook versus Instagram user numbers.

“Everyone likes to take shots at Facebook usage being down as the end of social media as we know it, but social media is more of a media channel now than ever before. Facebook has global saturation at this point and active users are just switching over to Instagram, which Facebook owns,” said Heiser.

“Here’s the thing – surveys are always skewing perception towards something. One survey cites the 15 million loss in young Facebook users as their rallying cry to jump to audio. However, if you look at the actual numbers, not survey results, Facebook reported that there were 1.52 billion daily active users in Q4, an increase of nine percent year-over-year.”

Facebook advertising still delivering. Weiss said his ad agency is not seeing an impact from any drops in usage. He believes Facebook’s Stories ad product, which the company rolled out in the News Feed last year, will drive more advertising on the platform.

“With Facebook’s continued investment in analytics across all its platforms, we also expect to see advanced engagement metrics for Stories, which will ultimately boost bottom-line revenue for Facebook,” said Weiss.

He is confident Instagram advertising revenue will continue to climb, also spurred by Story ads.

“As a matter of fact, advertising revenues on Instagram’s Stories are projected to increase as early studies demonstrate brands can expect to see higher ad recall and click-through rates than from previous ads that were posted on the Instagram feed,” said Weiss.

And then there’s WhatsApp. Edison Research’s survey did not include historical usage data on WhatsApp, the encrypted messaging app owned by Facebook, but did show 23 percent of the survey participants age 12 to 34 years old were using it this year.

“While Facebook doesn’t report on individual app growth regularly, you can easily infer that the growth is happening on WhatsApp and Instagram, which cater and are heavily used by a younger target,” said Heiser.

Facebook has been making subtle moves to get more people on WhatsApp — announcing in January that it plans to integrate its WhatsApp, Instagram and Facebook Messenger platforms, making it possible for users to communicate between the three apps. Facebook also said it is planning to roll out WhatsApp ads this year.

Why you should care. While the Edison Research survey may show declining user numbers, a loss of 15 million users on Facebook may not amount to much for a platform that has 1.5 billion daily active users. And recent reports show advertisers are ahead of the curve with Instagram, with lifts in incremental ad spending from loyal advertisers driving ad spend growth on the platform — money that is still going into Facebook’s pocket.

As Heiser noted, Facebook’s most recent earnings report isn’t showing any major impact from a drop in usage with ad revenue climbing to $16.6 billion during the fourth quarter of 2018. In fact, the company reported the average price of an ad decreased 2 percent, while ad impressions were up 34 percent. Twitter also saw increased ad revenue during the last quarter of 2018, up 23 percent year-over-year.

Overall, experts say, marketers shouldn’t make any knee-jerk changes to their social media strategy based on one survey.

“If you’re planning to use media, you shouldn’t be a fan of one or the other. The focus should be where your audience is most engaged: is it podcasts? Instagram? Facebook? Wherever it is, that’s where you need to be,” said Heiser.


About The Author

Amy Gesenhues is Third Door Media’s General Assignment Reporter, covering the latest news and updates for Marketing Land and Search Engine Land. From 2009 to 2012, she was an award-winning syndicated columnist for a number of daily newspapers from New York to Texas. With more than ten years of marketing management experience, she has contributed to a variety of traditional and online publications, including MarketingProfs.com, SoftwareCEO.com, and Sales and Marketing Management Magazine. Read more of Amy’s articles.


How retailers can survive Amazon’s stronghold in Google search


Retail marketers can’t out-Amazon on the paid Google SERP, but they can find white space.

Among the metrics that can help is impression share. It’s in the Google Auction Insights report for shopping and paid search campaigns.

Impression share is the percentage of impressions your ads received divided by the estimated number of impressions the ads were eligible to receive. Google determines eligibility based on a number of factors, including targeting settings, approval statuses and quality.
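As a formula-in-code sketch of that definition (the example counts are invented):

```python
# Impression share = impressions received / impressions eligible to receive.
def impression_share(impressions: int, eligible_impressions: int) -> float:
    """Return impression share as a fraction (0.0-1.0)."""
    if eligible_impressions <= 0:
        raise ValueError("eligible impressions must be positive")
    return impressions / eligible_impressions

# Hypothetical example: 45,000 impressions out of 150,000 eligible.
share = impression_share(45_000, 150_000)
print(f"{share:.0%}")  # -> 30%
```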

On the surface, impression share can help you understand whether your ads might reach more shoppers if you increase your bids or budget.

But a smarter way to use impression share is for gaining context into how your advertising environment is shifting. Evaluate it alongside other performance and competitive metrics. From there, use those insights to identify how to adapt your campaigns and bidding strategy to the changing competitive pressure.

Let’s take a look at the latest data and examples for how to go about it.

Amazon’s impression share in Google Shopping

We analyzed Google Auction Insights reports for a leading retailer in five verticals. These retailers all see Amazon as a regular competitor in Google Shopping and Google paid search.

The following chart shows the share of impressions Amazon has garnered over the last two years for Google Shopping auctions in which both the retailer and Amazon were eligible to serve an ad.

From this chart we can make a few observations. One is that Amazon’s impression share tended to increase as each year progressed, reaching a peak just before or during each holiday shopping period, and dipping sharply during Q2 2018 when Amazon briefly paused its shopping campaigns.

We can also see that Amazon’s share of impressions for categories such as office supplies and home improvement was consistently higher than its share for sporting goods or apparel.

Why the difference between verticals? In part it’s a reflection of each retailer’s search query universe and how much it overlaps that of Amazon. The home improvement and office supplies retailers likely share more of Amazon’s search query universe.

By contrast, a retailer who sells a lot of, say, North Face and Nike products might not see much competition from Amazon, because those brands are not available on Amazon. When consumers search using North Face- or Nike-branded terms, Amazon could still appear in search results with ads for similar products, but it would have a much lower impression share on those terms because of their lower relevance.

Ramping up apparel

Take a closer look below at Amazon’s impression share within the apparel category on Google Shopping over the past several months.

One takeaway here is that the hockey-stick growth aligns with Amazon’s private label surge. The company introduced seven new private label brands and over 150 Amazon-exclusive brands in Q4 2018, according to the TJI Amazon Brand Database. Amazon’s largest brand portfolio? Apparel and accessories, with over 80 private label and exclusive brands in the U.S.

Amazon’s impact in paid search vs. shopping campaigns

Looking at the same retailers in Google paid search shows a slightly different set of results.

Amazon has long been active in paid search. While it continues to experiment and fine tune its Google Shopping strategy, the company has a more established and consistently growing presence in paid search, as this impression share data suggests.

An outlier, however, is Amazon’s heightened impression share within the office supplies category. That trend aligns with Amazon’s push in the office supplies market over the past few quarters.

For another view of the data, let’s isolate Amazon’s impression share for each vertical.





Compete with Amazon, not against it

The best way to respond to Amazon’s growth is not to panic. Look at your bottom line and determine what, if any, impact Amazon is having on your business. Impression share is a metric that shouldn’t directly drive strategy, but rather provide context around the advertiser competition in your market.

At the end of the day, keep Amazon’s impression share in perspective. Amazon is influential, but retailers that know their business and customers can be well-equipped to handle rising impression share from competitors. Here’s how.

Know how to interpret impression share

Impression share can help you determine your biggest competitors on Google, and how that landscape is changing. While you probably know your competitors overall for your business, that composition might differ in Google’s shopping and paid search channels. For instance, retailers that devote most of their digital marketing budget to Google Shopping could create strong competition for you on that channel, while creating little competition elsewhere. Use impression share to uncover new entrants or established competitors who are being more or less aggressive with their bids. Say your CPCs suddenly rise. Examine impression share to see whether a competitor’s heightened spending is a factor.

Understand a healthy impression share for your business

Your business, competitive landscape, and return goals determine an ideal impression share. If you’re up against deep-pocketed competitors like Amazon, an impression share of 10% might be healthy for your campaigns, as long as you’re driving revenue efficiently. If you’re achieving your revenue targets within your campaign’s return goals, there’s little concern about a few competitors outranking you.

Dig into click share, too

Click share is the percentage of clicks on your ads relative to the clicks they were eligible to receive. Analyze click share in combination with impression share to get a better sense of where your campaigns are weak and can improve. In paid search, if impression share is high but click share is low, your ads might be appearing for irrelevant queries. If the same situation is happening in Google Shopping, your products might be priced too high above the competition. Or, maybe competitors are showing promotions on their ads more often than you. Conversely, if impression share is low and click share is high, consider bidding more aggressively to increase impressions and earn even more clicks. Push products that have the best price for an easy win.
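The diagnostics in this paragraph can be summarized as a rough lookup. The thresholds below are arbitrary assumptions for illustration, not Google's guidance:

```python
# Rough sketch of the impression-share / click-share diagnostics described above.
def diagnose(impression_share: float, click_share: float,
             high: float = 0.5, low: float = 0.2) -> str:
    """Map a (impression share, click share) pair to a suggested next step."""
    if impression_share >= high and click_share <= low:
        # Showing a lot but rarely clicked: relevance, price, or promo problem.
        return "check query relevance, pricing, and promotions"
    if impression_share <= low and click_share >= high:
        # Rarely showing but winning clicks when shown: room to bid up.
        return "consider bidding more aggressively"
    return "no obvious imbalance"

print(diagnose(0.6, 0.1))  # high impressions, few clicks
print(diagnose(0.1, 0.7))  # few impressions, strong clicks
```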

Use smarter segmentation

If you can’t simply increase budget as a response to competitors’ rising impression share, try this instead: Segment products into campaigns based on how much exposure you want those products to get. Increase bids in the campaigns containing the highest margin or best performing items. Or, create separate campaigns for branded and non-branded queries. In Sidecar’s 2018 Google Shopping Benchmarks report, we found that clicks from branded searches delivered 171 percent more ROI and a CTR four times higher than that of non-branded searches. Also, within Google Shopping, use negative keywords to filter queries and avoid wasting impressions on less relevant or low-performing terms.

Bring your mobile strategy up to date

Google Shopping hit a milestone in Q4 2018, according to Sidecar’s research. For the first time ever, more than half of all Shopping conversions occurred on mobile devices. Google paid search wasn’t far behind, with 44 percent of all conversions occurring on mobile in Q4. If exposure and brand awareness are among your goals for Google Shopping, you’ll get more bang for your buck on mobile, where CPCs are cheaper and where Showcase ads are a factor. Those mobile impressions can lead to conversions on both mobile and desktop. Consider creating a separate campaign for mobile traffic if you haven’t yet. It will let you tune bids granularly to how your products perform on mobile.

Plan search and shopping campaigns cohesively

As the above charts show, metrics like impression share vary between shopping and paid search campaigns. You might find, for instance, that you face greater competition in paid search than in Shopping. As a result, you might treat paid search as more of a bottom-of-the-funnel channel and focus spend on high-intent queries that have the greatest chance of converting. To complement that strategy, consider how you can fill the top of the funnel with Google Shopping—a channel where you already have an advantage in terms of exposure. You might be able to afford bidding more aggressively on a greater swath of products to drive up impression share even more.

Evaluate a move to multi-touch attribution

Most retail marketers probably agree that last-touch attribution is a fundamentally flawed approach in today’s omnichannel world. Multi-touch attribution, by contrast, can empower you to measure performance across channels and gain an entirely new (and more accurate) view of your customers’ journey. While shifting attribution models is certainly not a simple feat, some retailers, like Moosejaw, are successfully making the move. The retail landscape is only becoming more competitive. A multi-touch model that aligns with your business and goals might be one of the few major ways left to uncover a new advantage and push shoppers through your marketing funnel.
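To make the contrast with last-touch concrete, here is a minimal sketch of one common multi-touch variant, a linear model, which splits each conversion's credit evenly across every touchpoint in the path. The channel names, paths, and revenue figures are hypothetical, and real attribution platforms offer many other weighting schemes (time decay, position-based, data-driven).

```python
from collections import defaultdict

# Hypothetical conversion paths: (ordered touchpoints, revenue).
conversion_paths = [
    (["google_shopping", "email", "paid_search"], 90.0),
    (["paid_search"], 40.0),
]

def linear_attribution(paths):
    """Split each conversion's revenue evenly across its touchpoints.
    Last-touch, by contrast, would give 100% of each path's revenue
    to the final channel only."""
    credit = defaultdict(float)
    for touchpoints, revenue in paths:
        share = revenue / len(touchpoints)
        for channel in touchpoints:
            credit[channel] += share
    return dict(credit)

print(linear_attribution(conversion_paths))
```

Here paid_search would earn all 130.0 under last-touch, but only 70.0 under the linear model, with google_shopping and email each credited 30.0 for their assists.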

By carefully coordinating shopping and paid search campaigns, you’re positioning yourself to achieve a full-funnel marketing approach. Put your customers first when devising any strategy for Google Ads, while keeping your competitors in view.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.


About The Author

Steve Costanza is the Senior Analytics Consultant of Enterprise Customer Strategy at Sidecar. He analyzes digital marketing performance and strategic direction for large retailers across verticals, focusing on data visualizations and advanced account segmentation. He is responsible for deriving meaning from numbers and determining how to use those insights to drive marketing decision making. Steve is especially close to Google’s new innovations impacting Shopping and paid search. He has a master’s degree in data analytics and contributes to Search Engine Land as well as Sidecar Discover, the publication by Sidecar that covers research and ideas shaping digital marketing in retail.

