Creative has changed. Gone are the days when a brand could sink all of its heart, soul and budget into a single creative experience – say, a television ad. It’s 2019; you can’t be so singularly focused. Your audience sure isn’t.
You still need big ideas, like Dos Equis’ “Most Interesting Man in the World,” P&G’s “Thank you, Mom” campaign or Apple’s “Mac versus PC” concept. But today, brands need to consider how to translate those big ideas across channels and screens, especially when it comes to video, so that they can tell a cohesive and engaging story.
Companies scrambled to secure the platforms and support they needed to transact advertising programmatically, but without effective creative, those tools don’t mean much. Technology and creative must go hand in hand. It is time for brands to consider the nuances of each format at every stage of campaign production, allowing them to create an effective video for today’s multi-screen world.
Begin by understanding your audience. Be willing to dig deeper and uncover things that challenge what you thought you knew. To do so, you will need to access and analyze data on an ongoing basis. Brands that use data smartly often discover surprising insights. Perhaps their audience has evolved over time, or they have a secondary customer base they had never thought of before.
In addition to the “who,” you need to understand the “where” and the “how” before you start shooting your creative. Is your mobile audience watching video while they commute to work, or are they “second-screening” while they watch TV? You will want your creative to reflect these different scenarios.
Quick Tip: With these insights in mind, plan to create customized creative experiences for each audience segment, screen and use case.
Creative is too often an afterthought in the digital world. Let’s use an automotive brand as an example. They put all this work into shooting a beautiful video ad that looks great on TV screens. Many months later, there is a digital media plan, and that beautiful TV spot needs to be repurposed for various digital screens, where user behavior can be quite different. The creative partner ends up cutting the original footage into shorter spots or letting the TV spot run as-is, which can lead to pretty bad user experiences. For example, a car driving in the distance against a beautiful background looks terrific on TV, but on mobile, that car looks like an ant. That brand should have shot with digital in mind from the beginning.
For digital, you need more close-ups and quick cuts. You also need to frontload the most important part of your ad, whether that is a brand tagline or an aspect of your product. If you can, always overshoot. You may not know all your strategies at the time you are shooting, so extra footage will come in handy as your media plan evolves.
Quick Tip: Consider how to tell your story without relying on audio. This may mean using subtitles, stronger visuals or more logos and product shots.
As you execute your campaign, stay agile and open to trying new tactics. Just be sure your creative reflects each strategy. Repurposing assets is possible, but do so with care. For example, if you are repurposing footage for mobile, you may need to add overlays or interactive features. If your video was shot in landscape, you will need a creative partner who can edit it for the vertical-screen environments in which many mobile viewers consume content.
Quick Tip: When evaluating your campaign, think more broadly about what different metrics tell you about how successful your campaign has been at driving consumer behavior. Think beyond the standard KPIs like CTR and completion rates, and look at things like engaged-time spent and what happens post-click. What about the creative specifically drove these metrics, and how can you apply those learnings to current and future campaigns?
Brands have the data and tools they need to reach their audiences with unprecedented precision, but if the creative isn’t effectively speaking to them, that is all for naught. It is time to put the pieces together.
Opinions expressed in this article are those of the guest author and not necessarily Marketing Land. Staff authors are listed here.
About The Author
As Senior Creative Director, Les Seifer leads Tremor Video DSP’s in-house Creative Studio. His team brings creative video intelligence to media campaigns, combining leading-edge advanced creative with unique data and insights. His creative roots run deep outside the office. As a painter, his artwork has been displayed in galleries and museums across the country.
(Reuters) – Kraft Heinz Co said on Monday it would replace Chief Executive Officer Bernardo Hees with Anheuser-Busch InBev marketing chief Miguel Patricio, as one of the world’s largest packaged food companies looks to reinvigorate its brands after years of cutting costs dented their value.
FILE PHOTO: A Heinz Ketchup bottle sits between a box of Kraft macaroni and cheese and a bottle of Kraft Original Barbecue Sauce on a grocery store shelf in New York March 25, 2015. REUTERS/Brendan McDermid/File Photo
In February, the Heinz ketchup maker cut its dividend payouts, wrote down the value of its marquee Kraft and Oscar Mayer brands and other assets by more than $15 billion and disclosed a regulatory probe into its accounting practices.
The broad sector has struggled with rising transportation and commodity costs along with a shift in consumer preferences to more niche health-focused brands.
The Velveeta cheese maker’s second-biggest shareholder, 3G Capital, has pushed the company to rein in expenses to tackle higher costs and sluggish growth, a strategy it has used effectively at Heinz and at Anheuser-Busch, another company in which it has a stake.
3G and Warren Buffett’s Berkshire Hathaway Inc together own more than 50 percent of Kraft Heinz.
“The change at the top of Kraft Heinz is a positive development,” said Jason Benowitz, a fund manager at Roosevelt Investment Group, which previously held a stake in Kraft Heinz.
“It shows that management and the board understand the serious nature of the challenges facing the company. Kraft Heinz … cannot further cost cut its way to prosperity.”
The company’s shares rose about 1 percent in early trading, after more than halving in value since H.J. Heinz and Kraft Foods, two of the United States’ biggest food and beverage producers, merged in 2015.
Kraft Heinz has been the worst performing stock on the S&P 500 Packaged Foods and Meats index over the last year, falling some 43 percent.
Patricio takes over the top job in July after spending two decades at Anheuser-Busch, most recently as the Budweiser brewer’s global chief marketing officer.
Prior to AB InBev, Patricio worked at a range of major consumer goods producers including Philip Morris, Coca-Cola Co and Johnson & Johnson.
“By appointing Mr. Patricio as the new CEO, it appears that Kraft Heinz is doubling down on its efforts to reinvigorate the top line,” Bernstein analyst Alexia Howard wrote in a note.
His prior experience as the president of Asia Pacific of Anheuser-Busch InBev might enable him to explore more growth opportunities in emerging markets at Kraft Heinz, she said.
Reporting by Uday Sampath in Bengaluru; Editing by Shailesh Kuber and Saumyadeb Chakrabarty
Nearly three-fourths (72 percent) of smartphone owners are using digital assistants, according to a new report from Microsoft. The findings are based on two surveys – one from mid-2018 that includes an international sample, and a 2019 follow-up involving 5,000 U.S. consumers. The study also found that 35 percent of the survey population had used “voice search” through a smart speaker.
Google and Apple tied for usage lead. In terms of usage market share, the report found Siri and Google Assistant tied at 36 percent, followed by Alexa (25 percent) and then Cortana (19 percent). The overwhelming majority of Cortana’s usage is on the desktop. These figures are not the same as device share. Google Assistant is available on more than a billion devices and Amazon dominates the smart speaker hardware market.
Top assistant use cases. Like many reports covering digital assistants, this one sometimes fails to make clear distinctions between smart speaker and smartphone usage. However, the report spends considerable time discussing smart speaker adoption and use cases.
In the context of that smart speaker discussion, Microsoft presents the following hierarchy of digital assistant usage:
Searching for a quick fact — 68 percent
Asking for directions — 65 percent
Searching for a business — 47 percent
Researching a product or service — 44 percent
Making a shopping list — 39 percent
Comparing products or services — 31 percent
Adding items to a shopping cart — 26 percent
Making a purchase — 25 percent
Contacting customer service or support — 21 percent
Providing feedback for a product/service — 19 percent
Some of the answers on this list (e.g., comparing products or services) suggest that respondents were commenting broadly about assistant usage – not just smart speakers. Indeed, the absence of responses such as “checking the weather” or “playing music” (answers common in other smart speaker surveys) suggests this as well.
The study found that 80 percent were “satisfied” with their digital assistant experiences (most likely on smart speakers this time), while 14 percent were “neutral” and only 6 percent were dissatisfied.
22-point jump in ownership. In terms of smart speaker ownership, the 2018 survey discovered 23 percent of respondents had one. That number has jumped to 45 percent this year. Under the assumption that this is a U.S.-based population, that would mean roughly 112 million Americans today own at least one smart speaker, with an additional 26 percent saying they’re going to purchase one this year.
A very interesting finding surrounds brand-purchase intent. Amazon Echo has gained compared with 2018 and Google Home has lost share of intent to purchase. The number of people who said they want to buy a Google Home speaker declined from 58 percent in 2018 to 17 percent this year. It’s possible that the 58 percent bought Google Home devices, hence the drop. But the decline is noteworthy.
The Google Home Mini didn’t suffer the same decline in purchase intent. Finally, 26 percent of the audience said that they were interested in buying an alternative brand, which may include Sonos and the Apple HomePod, although that’s not clear from the report.
Digital assistant privacy concerns. A substantial minority (41 percent) of respondents said they had “concerns” about digital assistants — again, probably smart speakers here. Asked to elaborate, the top response was “that my personal information is not secure” (52 percent), followed by “that it is actively listening and/or recording me” (41 percent) and then “I don’t want my personal information or data used” (36 percent). These fears are not entirely unfounded, given recent revelations about Amazon employees listening to Alexa recordings — justified to improve voice recognition and understanding.
The surveys also asked about shopping using a digital assistant or smart speaker. Just over 41 percent said they had made a purchase through one or both channels (with 6.5 percent saying they didn’t enjoy it). The other roughly 59 percent had not made a purchase, with 27 percent in that group saying they were interested in making future purchases using assistants. More than half (54 percent) of respondents said they believed that digital assistants will help them make retail purchases within five years.
Why we should care. Both consumers and retailers expect smart speakers (and smartphone assistants) to become an important purchase channel in the next few years. The activities detailed in the list above argue that some search behaviors will transfer to voice channels over time.
There are clear implications for marketers, tied to voice optimization and other tactics. For example, if you’re a local service business, there are specific things that must be done to appear in Google Home local listings. It’s also incumbent upon marketers to experiment with smart speakers to determine the most effective use cases for their brands and content.
Finally, certain shopping and commerce experiences may become common through smart speakers. Walmart’s updated voice grocery shopping experience represents a potentially successful voice-commerce model, involving list creation and reordering.
About The Author
Greg Sterling is a Contributing Editor at Search Engine Land. He researches and writes about the connections between digital and offline commerce. He is also VP of Strategy and Insights for the Local Search Association. Follow him on Twitter or find him at Google+.
LONDON (Reuters) – The new managers of Columna Commodities Fund, a Luxembourg hedge fund which went into liquidation in early 2017, have said they are suing its former managers Alter Domus for $56 million in lost assets and fees.
Columna, launched in 2013, was a top-performing fund in a stable known as LFP I SICAV, managed by Luxembourg Fund Partners.
Alter Domus, a Luxembourg fund platform and administrator that has financial backing from private equity giant Permira, bought Luxembourg Fund Partners in December 2017, after Columna’s collapse, when LFP I SICAV’s assets under management totaled nearly 400 million euros ($450 million).
In a statement released earlier this week, the new directors said they had launched a claim to recover investment losses, management and performance fees from Alter Domus Management Company. LFP I SICAV’s assets under management now total around 80 million euros.
Columna made double-digit gains in 2014, 2015 and 2016 investing in a range of commodity products, according to information it sent its investors. But it then closed abruptly in December 2016 without returning any of its assets.
An Alter Domus spokesman said the firm only became aware of “significant issues” with Columna between 2013 and 2016 after it bought Luxembourg Fund Partners.
The spokesman declined to comment on the legal claim.
In a previous email, he said Alter Domus was looking into the issues with Columna and had “engaged various external firms to assist with our investigation, the findings of which has led to the commencement of legal actions”. He declined to comment further on the legal actions, saying they were ongoing.
Permira declined to comment.
After being asked by Columna investors to help, asset recovery specialist David Mapley was one of three directors appointed to a new board of LFP I SICAV late last year and authorized by the Luxembourg regulator in February 2019 to take over management of the fund stable from Alter Domus.
Luxembourg’s financial regulator declined to comment on individual firms or court cases.
Reporting by Carolyn Cohn, Simon Jessop and Maiya Keidan; editing by David Evans
When you incorporate machine learning techniques to speed up SEO recovery, the results can be amazing.
This is the third and last installment in our series on using Python to speed up SEO traffic recovery. In part one, I explained how our unique approach, which we call “winners vs losers,” helps us quickly narrow down the pages losing traffic to find the main reason for the drop. In part two, we improved on our initial approach by manually grouping pages using regular expressions, which is very useful when you have sites with thousands or millions of pages, which is typically the case with ecommerce sites. In part three, we will learn something really exciting: we will learn to automatically group pages using machine learning.
As mentioned before, you can find the code used in part one, two and three in this Google Colab notebook.
Let’s get started.
URL matching vs content matching
When we grouped pages manually in part two, we benefited from the fact that the URL groups had clear patterns (collections, products, and so on), but it is often the case that there are no patterns in the URL. For example, Yahoo Stores’ sites use a flat URL structure with no directory paths. Our manual approach wouldn’t work in this case.
Fortunately, it is possible to group pages by their content, because most page templates have distinct content structures. They serve different user needs, so their structures are bound to differ.
How can we organize pages by their content? We can use DOM element selectors for this. We will specifically use XPaths.
For example, I can use the presence of a big product image to know the page is a product detail page. I can grab the product image address in the document (its XPath) by right-clicking on it in Chrome and choosing “Inspect,” then right-clicking to copy the XPath.
We can identify other page groups by finding page elements that are unique to them. However, note that while this would allow us to group Yahoo Store-type sites, it would still be a manual process to create the groups.
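As a rough sketch of this idea, assuming the lxml library (the XPaths and page markup below are hypothetical; real selectors would come from inspecting the site as described above):

```python
from lxml import html

# Hypothetical marker elements for each template type; real selectors
# would come from inspecting the site's actual markup.
TEMPLATE_XPATHS = {
    "product": "//img[@id='main-product-image']",
    "category": "//div[@class='product-grid']",
}

def classify_page(page_source):
    """Return the first template whose marker element exists in the page."""
    tree = html.fromstring(page_source)
    for template, xpath in TEMPLATE_XPATHS.items():
        if tree.xpath(xpath):  # a non-empty result list means the marker exists
            return template
    return "other"

classify_page("<html><body><img id='main-product-image'/></body></html>")  # → 'product'
```

Note that this still requires picking the marker XPaths by hand for each site, which is exactly the manual step the rest of this article automates away.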
A scientist’s bottom-up approach
In order to group pages automatically, we need to use a statistical approach. In other words, we need to find patterns in the data that we can use to cluster similar pages together because they share similar statistics. This is a perfect problem for machine learning algorithms.
BloomReach, a digital experience platform vendor, shared their machine learning solution to this problem. To summarize it, they first manually selected and cleaned features from the HTML tags, like class IDs, CSS style sheet names, and so on. Then, they automatically grouped pages based on the presence and variability of these features. In their tests, they achieved around 90% accuracy, which is pretty good.
When you give problems like this to scientists and engineers with no domain expertise, they will generally come up with complicated, bottom-up solutions. The scientist will say, “Here is the data I have, let me try different computer science ideas I know until I find a good solution.”
One of the reasons I advocate practitioners learn programming is that you can start solving problems using your domain expertise and find shortcuts like the one I will share next.
Hamlet’s observation and a simpler solution
For most ecommerce sites, most page templates include images (and input elements), and those generally change in quantity and size.
I decided to test the quantity and size of images, and the number of input elements, as my feature set. We were able to achieve 97.5% accuracy in our tests. This is a much simpler and more effective approach for this specific problem. All of this was possible because I didn’t start with the data I could access, but with a simpler domain-level observation.
I am not trying to say my approach is superior, as they have tested theirs in millions of pages and I’ve only tested this on a few thousand. My point is that as a practitioner you should learn this stuff so you can contribute your own expertise and creativity.
Now let’s get to the fun part and write some machine learning code in Python!
Collecting training data
We need training data to build a model. This training data needs to come pre-labeled with “correct” answers so that the model can learn from the correct answers and make its own predictions on unseen data.
In our case, as discussed above, we’ll use our intuition that most product pages have one or more large images on the page, and most category type pages have many smaller images on the page.
What’s more, product pages typically have more form elements than category pages (for filling in quantity, color, and more).
Unfortunately, crawling a web page for this data requires knowledge of web browser automation, and image manipulation, which are outside the scope of this post. Feel free to study this GitHub gist we put together to learn more.
Here we load the raw data already collected.
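The loading code itself isn’t reproduced here, but the expected shape of the two data frames can be sketched with hypothetical stand-in values (in practice you would read your own crawl output, e.g. with pandas’ read_csv):

```python
import pandas as pd

# Hypothetical stand-ins for the collected crawl data.
# One row per URL: counts of form and input elements on that page.
form_counts = pd.DataFrame({
    "url": ["/page-a", "/page-b"],
    "form_count": [1, 3],
    "input_count": [2, 7],
})

# One row per image, so a single URL can appear many times.
img_counts = pd.DataFrame({
    "url": ["/page-a", "/page-a", "/page-b"],
    "img_size_bytes": [120_000, 340_000, 15_000],
    "height": [600, 800, 120],
    "width": [600, 800, 120],
})
```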
Each row of the form_counts data frame above corresponds to a single URL and provides a count of both form elements, and input elements contained on that page.
Meanwhile, in the img_counts data frame, each row corresponds to a single image from a particular page. Each image has an associated file size, height, and width. Pages are likely to have multiple images, so there are many rows corresponding to each URL.
It is often the case that HTML documents don’t include explicit image dimensions. We use a little trick to compensate for this: we capture the size of the image files, which is roughly proportional to the product of the images’ width and height.
We want our image counts and image file sizes to be treated as categorical features, not numerical ones. When a numerical feature, say new visitors, increases, it generally implies improvement, but we don’t want bigger images to imply improvement. A common technique for this is called one-hot encoding.
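A minimal one-hot encoding sketch using pandas (the bin labels here are hypothetical):

```python
import pandas as pd

# Hypothetical image-count bins for four URLs.
df = pd.DataFrame({"img_count_bin": ["0-5", "6-10", "0-5", "11-20"]})

# Each bin becomes its own 0/1 indicator column, so the model
# can't interpret "more images" as "better".
one_hot = pd.get_dummies(df["img_count_bin"], prefix="imgs")
print(one_hot.columns.tolist())  # → ['imgs_0-5', 'imgs_11-20', 'imgs_6-10']
```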
Most site pages can have an arbitrary number of images. We are going to further process our dataset by bucketing images into 50 groups. This technique is called “binning”.
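Here’s a small binning sketch with pandas’ cut function, using hypothetical file sizes and three bins instead of 50 so the output stays readable:

```python
import pandas as pd

# Hypothetical image file sizes in bytes.
sizes = pd.Series([1_200, 45_000, 230_000, 9_800, 560_000])

# Bucket the continuous sizes into a fixed number of labeled bins.
binned = pd.cut(sizes, bins=3, labels=["small", "medium", "large"])
print(binned.tolist())  # → ['small', 'small', 'medium', 'small', 'large']
```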
Here is what our processed data set looks like.
Adding ground truth labels
As we already have correct labels from our manual regex approach in part two, we can use them as the ground truth to feed the model.
We also need to split our dataset randomly into a training set and a test set. This allows us to train the machine learning model on one set of data, and test it on another set that it’s never seen before. We do this to prevent our model from simply “memorizing” the training data and doing terribly on new, unseen data.
Model training and grid search
Finally, the good stuff!
All the steps above, the data collection and preparation, are generally the hardest part to code. The machine learning code is generally quite simple.
We’re using the well-known scikit-learn Python library to train a number of popular models using a range of standard hyperparameters (settings for fine-tuning a model). Scikit-learn runs through all of them to find the best one; we simply feed the X variables (our engineered features above) and the Y variables (the correct labels) into each model, call the .fit() method, and voila!
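The pattern looks roughly like this (the dataset and hyperparameter grid below are stand-ins, not the article’s actual settings):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

# Stand-in data; in the article, X and y are the page features and labels.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Try several regularization strengths; GridSearchCV cross-validates
# each one and keeps the best.
grid = GridSearchCV(LinearSVC(), {"C": [0.01, 0.1, 1, 10]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

The same GridSearchCV call works for any scikit-learn estimator, so comparing the SVM against logistic regression is just a matter of swapping the model and its parameter grid.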
After running the grid search, we find our winning model to be the linear SVM (0.974), with logistic regression (0.968) coming in a close second. Even with such high accuracy, a machine learning model will make mistakes. If it doesn’t make any mistakes, then there is definitely something wrong with the code.
In order to understand where the model performs best and worst, we will use another useful machine learning tool, the confusion matrix.
When looking at a confusion matrix, focus on the diagonal squares. The counts there are correct predictions, and the counts outside are failures. In the confusion matrix above, we can quickly see that the model does really well labeling products, but terribly on pages that are neither products nor categories. Intuitively, we can assume that such pages do not have consistent image usage.
Here is the code to put together the confusion matrix:
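A minimal version, assuming scikit-learn and hypothetical labels, looks like this:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true labels vs. model predictions for six pages.
y_true = ["product", "product", "category", "other", "category", "other"]
y_pred = ["product", "product", "category", "product", "category", "category"]

labels = ["product", "category", "other"]
cm = confusion_matrix(y_true, y_pred, labels=labels)
print(cm)
# Rows are true labels, columns are predictions;
# the diagonal holds the correct predictions.
```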
Finally, here is the code to plot the model evaluation:
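A plotting sketch using scikit-learn’s ConfusionMatrixDisplay with matplotlib (hypothetical labels again):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

# Hypothetical true labels vs. model predictions for six pages.
y_true = ["product", "product", "category", "other", "category", "other"]
y_pred = ["product", "product", "category", "product", "category", "category"]
labels = ["product", "category", "other"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
disp = ConfusionMatrixDisplay(confusion_matrix=cm, display_labels=labels)
disp.plot(cmap="Blues")  # draws the heatmap with per-cell counts
disp.figure_.savefig("confusion_matrix.png")
```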
Resources to learn more
You might be thinking that this is a lot of work just to identify page groups, and you are right!
Mirko Obkircher commented in my article for part two that there is a much simpler approach, which is to have your client set up a Google Analytics data layer with the page group type. Very smart recommendation, Mirko!
I am using this example for illustration purposes. What if the issue requires a deeper exploratory investigation? If you already started the analysis using Python, your creativity and knowledge are the only limits.
If you want to jump onto the machine learning bandwagon, here are some resources I recommend to learn more:
Got any tips or queries? Share them in the comments.
Hamlet Batista is the CEO and founder of RankSense, an agile SEO platform for online retailers and manufacturers. He can be found on Twitter .
(This April 18 story corrects to show operating ratio declined, error also occurred in previous updates)
FILE PHOTO: A Union Pacific rail car is parked at a Burlington National Santa Fe (BNSF) train yard in Seattle, Washington, U.S., February 10, 2017. REUTERS/Chris Helgren
By Lisa Baertlein and Rachit Vats
(Reuters) – U.S. railroad operator Union Pacific Corp on Thursday reported a better-than-expected quarterly profit as price increases and cost controls offset the impact of severe winter weather and record flooding that damaged rails in the Midwest.
The quarter was a test for the second-largest U.S. railroad’s sweeping operational overhaul, and the results sent shares up 4.9 percent to $177.63.
Efforts to streamline operations and create surge capacity helped railway crews reroute the 50 to 60 daily trains that use the east-west main line that floodwaters severed for almost two weeks, Chief Executive Lance Fritz told Reuters.
“We’re gaining traction. … I see us coming back quickly and strongly,” Fritz said.
Net income at Union Pacific, which serves the Western two-thirds of the country, rose 6.2 percent to $1.4 billion, or $1.93 per share, in the first quarter. That topped analysts’ average forecast of $1.89, according to IBES data from Refinitiv.
Total operating revenue fell 1.7 percent to $5.4 billion. Weather and the U.S. trade war with China reduced export grain carloads, but pricing rose nearly 2.8 percent.
Expenses dropped 3.2 percent, assisted by workforce reductions and a switch to longer trains, which reduces fuel, maintenance and labor costs.
The Omaha, Nebraska-based company early this year hired former Canadian National Railway Co executive and turnaround expert Jim Vena as its chief operating officer and tasked him with overseeing its plan to lower costs and improve service and reliability.
Union Pacific’s first-quarter operating ratio – a measure of operating expenses as a percentage of revenue – declined 1 point to 63.6 percent, despite the weather disruptions. A lower ratio means more efficiency and higher profitability.
The railroad, which is working to get that key performance metric below 60 percent by 2020, said it was increasing network flexibility by reallocating investments.
For example, it paused work on its $550 million rail yard facility in Brazos, Texas, and earmarked unused 2019 capital for projects along its southwestern “Sunset” corridor. Seven rail lines converge at Brazos and the project, started in January 2018, was the largest facility investment in its 155-year history.
Transportation companies are a bellwether for business activity and investors are watching them closely as the global economy cools.
The U.S. economy is flashing warning signs as manufacturing softens and stimulus from the $1.5 trillion tax-cut package ebbs.
Trade “is the thing that could tip us into a worse economy,” Fritz said.
Reporting by Rachit Vats in Bengaluru and Lisa Baertlein in Los Angeles; Editing by Dan Grebler and Peter Cooney
There is a right way to compose and send a guest-post or guest-article pitch, and there is a wrong way. Actually make those plural: “ways.”
Content marketing agency PointVisible partnered with influencer outreach tool Pitchbox to ask 80+ editors about guest-post pitches—how many they receive, when senders should follow up, what elements they should contain, what are common mistakes, what kinds of subject lines work, and more…
In addition to answering those questions, the editors—from medium- and high-authority business and marketing sites—provided insights and advice about what makes for a good pitch that they’ll actually open, read, and engage with.
(Reuters) – American Media Inc (AMI) said on Thursday it is selling its tabloid the National Enquirer to James Cohen, chief executive officer of Hudson Media.
U.S. tabloid newspaper the National Enquirer display rack is seen in Washington, U.S., April 10, 2019. REUTERS/Jeenah Moon
The National Enquirer had admitted to paying hush money to help U.S. President Donald Trump get elected and been accused of attempting to blackmail Amazon founder Jeff Bezos.
The weekly tabloid, along with two of its sister publications, will be purchased by the head of Hudson Media, whose family used to own the Hudson chain of airport newsstands.
This comes a week after AMI said it was looking at “strategic options” for the National Enquirer as well as for the Globe and the National Examiner brands.
The sale is expected to reduce AMI’s debt to $355 million.
Earlier in the day, the Washington Post reported on AMI’s decision to sell its tabloid to the head of Hudson News for $100 million.
Last week, the New York Times reported that owners of the National Enquirer were in talks to sell the tabloid to the California-based billionaire Ronald Burkle.
In addition, Paul Pope, one of the heirs of National Enquirer founder Generoso Pope Jr, had also been on the list of bidders, according to media reports.
On Tuesday, Pope dropped his bid to buy the supermarket tabloid from American Media, the New York Post reported.
Over its 92-year history, the National Enquirer has enticed readers in supermarket checkout lines with sensational headlines and photos about celebrities. The tabloid’s website claims it has a viewership of 5 million.
Earlier in February, Amazon.com Inc CEO Bezos accused the publication of trying to blackmail him with the threat of publishing intimate photos.
(This version of the story clarifies in paragraph 3 Hudson Media CEO’s “family used to own the Hudson chain of airport newsstands”)
Reporting by Arjun Panchadar and Vibhuti Sharma in Bengaluru; Editing by Maju Samuel and James Emmanuel
Experimentation is a core strategy for product development and growth at organizations that are winning in the market.
In a report on the Insights-Driven Business, Forrester demonstrated that businesses with closed-loop learning processes at their core are growing at least eight times faster than the global GDP. These learning processes have experimentation at their center.
Leading companies like Netflix, Uber, Amazon, Airbnb, and Microsoft are quick to share that they are fueled by experimentation. These are what we would call mature organizations: they leverage experimentation to generate continuous insights and growth that impact their bottom line.
Pursuing experimentation maturity
At WiderFunnel, we have been working with brands to build and scale insight-generating experimentation programs for over 12 years. In doing that work, we have identified five phases of experimentation maturity that organizations progress through.
Level 1: Initiating. Organizations at this stage are just getting started. An Experimentation Champion is working to get initial wins to prove the value of an experimentation program.
Level 2: Building. In this stage, an organization is bought-in on the value of experimentation, and an Experimentation Champion or team is establishing process and building the infrastructure to scale the program.
Level 3: Collaborating. Organizations at this stage are expanding the experimentation program and collaborating across teams. Finalizing a communications plan and overall protocol for the program is a priority here.
Level 4: Scaling. Experimentation is a core strategy for these organizations. Standards are in place and success metrics are aligned with overall business goals, enabling testing at scale.
Level 5: Driving. The highest level of maturity. Experimentation is the organization’s growth and product strategy. The Amazons, Netflixes, and Booking.coms of the world are here.
The organizational culture component
While there are multiple pillars of a mature experimentation organization—such as a powerful technology foundation and clear objectives for the program—developing a culture of experimentation is essential. In order to move from one phase into the next, you must foster an organizational culture that embraces testing and learning.
Ultimately, a culture of experimentation is a function of effective communication. If you can’t get this piece right, your organization will never reach the highest level of maturity.
A hypothetical example
Imagine you are an Experimentation Champion trying to build a testing program from scratch. You have a vision for experimentation; you believe in its value and are excited to get moving. You are a one-person show, but you are convinced you can get the rest of your company on board.
To do this, you decide to get everyone involved in the experimentation program at the outset. You open the program up to the whole organization. You start sourcing ideas from everyone and everywhere.
But there’s a problem: Not everyone understands experimentation. People are contributing ideas without thinking about them, without thinking about the greater context. Soon, you are buried under ideas you can’t actually execute on.
And the experimentation program becomes a joke: a dumping ground where ideas go to die and nothing seems to get done. The teams around you lose faith and gradually stop submitting ideas. Eventually, experimentation becomes…
“Something we tried once. It doesn’t work for us.”
We see this story unfold all too often. But this doesn’t have to be your story. You, as an intrapreneur, can foster a culture of experimentation and avoid this crash and burn scenario.
Inspire. Educate. Inform.
You can do this by leveraging three essential actions in your communication: Inspire, Educate, and Inform.
Inspire means creating the spark. Inspiration occurs when you create a moment of clarity and awareness of new possibilities, as well as a desire to take action—to get involved.
Inspiration […] involves a moment of clarity and awareness of new possibilities. This moment of clarity is often vivid, and can take the form of a grand vision, or a “seeing” of something one has not seen before (but that was probably always there). Finally, inspiration involves approach motivation, in which the individual strives to transmit, express, or actualize a new idea or vision. [It] involves both being inspired by something and acting on that inspiration.
Educate means training. This action involves training, by instruction and/or supervised practice, in a particular skill. In this case, how to do experimentation.
Inform is closing the loop. Inform means actually communicating a message or making something known.
In the journey to a mature culture of experimentation, these three actions will need to be at play at different levels.
When your organization is just getting started with experimentation, testing is likely owned by a single, core team. Inspire, Inform, and Educate must be at work in this core team before your organization can move into the next phase.
To scale, your organization will need to empower supporting teams to participate in the experimentation program. The ultimate goal is an organization where every single person has an experimentation mindset, from your CEO to your Lead Engineer to the customer support heroes who pick up the phone every day.
To get here, you will have to continuously Inspire, Educate, and Inform, to drive organizational change and foster the experimentation mindset. But what does that actually look like?
Just getting started with experimentation
In the early stages of maturity, a core team is often responsible for running optimization experiments. They are likely focused on getting initial buy-in for testing and building momentum around positive results.
One or a few team members should also be focused on Inspiring, Educating, and Informing necessary stakeholders, to lay the culture of experimentation foundation.
Inspiring your core team
As an Experimentation Champion, your first priority should be to recruit a core experimentation team. This team should include an Executive Sponsor (if that isn’t you), and individuals or partners who can execute experiments: Design, Engineering, Data Science, Experimentation Strategy, etc.
You may have these resources in-house, or you may decide to bring in an enabling partner to augment your capabilities. Either way, your core team should help you 1) develop and prioritize experiments, 2) execute experiments, and 3) socialize experiment results.
You will need to inspire the members of your core team to motivate them to get involved and stay involved in experimentation. At the outset, this means tailoring your message to each individual and showing them the new possibilities of testing. You should constantly ask yourself:
Who am I speaking to?
What do they care about?
A real-world example
One of our clients is a technology startup. Several months ago, the Head of Demand Generation decided to implement an experimentation program. She knew she needed to start by recruiting a small core experimentation team.
She began with an Executive Sponsor—the company’s VP of Revenue. This VP has an all-star sales background, but he wasn’t familiar with the concepts of “experimentation” and “conversion optimization”. She needed to show him new possibilities that would matter to him.
The company had just finished a website redesign. In this context, the Champion worked carefully to explain to her VP that, while the project had succeeded from a brand and aesthetic angle, there were still potential points of friction in the user experience.
She pointed out that the company was spending substantial money to funnel traffic to the redesigned website, and emphasized the missed opportunity of not addressing these potential barriers to conversion—the opportunity to increase their primary metric by 2%, 5%, 10%.
She spoke to the VP in financial terms that mattered to him, and showed him the financial possibilities around marketing experimentation. And he got on board because he was inspired.
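The financial framing she used is easy to make concrete with a little arithmetic. A minimal sketch of the math behind "the opportunity to increase their primary metric by 2%, 5%, 10%"—the traffic, conversion-rate, and order-value figures below are hypothetical, not the client's:

```python
# Back-of-envelope value of a conversion-rate lift.
# All input figures are hypothetical illustration values.
def incremental_revenue(monthly_visitors, baseline_cr, relative_lift, value_per_conversion):
    """Extra monthly revenue from lifting the conversion rate by `relative_lift`."""
    baseline_conversions = monthly_visitors * baseline_cr
    lifted_conversions = baseline_conversions * (1 + relative_lift)
    return (lifted_conversions - baseline_conversions) * value_per_conversion

for lift in (0.02, 0.05, 0.10):  # the 2%, 5%, 10% scenarios
    extra = incremental_revenue(100_000, 0.03, lift, 150)
    print(f"{lift:.0%} lift -> ${extra:,.0f}/month")
```

Even the modest 2% scenario turns into a concrete dollar figure, which is exactly the kind of number an executive sponsor can act on.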
Educating within a core team
At this stage of experimentation maturity, Educate and Inform can often be done informally.
Your core team should have the skills to do experimentation, but they may not have complete understanding around why and how to do it. Design may understand UX best practices, but they may be wary of marketing experiments that could challenge brand standards. Engineering may be highly focused on product development, and lack the front-end development experience needed to develop marketing experiment variations.
Your best course of action in this case is to involve your Designer(s) and Engineer(s) in the conversation as early as possible. Remember, these are members of your core team. As such, they should be a part of the experimentation conversation from start to finish.
This is education via supervised practice: by doing. As the Experimentation Champion, you should be guiding the conversation around overall objectives and experimentation frameworks. You should be educating the other members of your core team.
If you yourself are unsure about the ins and outs of testing, it may be a good idea to bring in an experimentation partner. With our technology client, the Champion knew she was missing critical pieces of a core team, including strategic support and dedicated resources, which is why she brought in WiderFunnel as an enabling partner. In that role, we are able to work with her to transfer knowledge around experimentation to the members of her core team.
Informing within a core team
When it comes to Inform, you must make sure that you are closing the loop with each stakeholder on your core experimentation team.
This means informing your team members when an experiment is launched and informing them when and why it is completed. It means including them in the results analysis conversation and in determining next steps.
If someone is involved in an experiment, you must keep them informed, particularly regarding the impact of that experiment. Whether this is via email, Slack, or simply a face-to-face conversation, the importance of closing the loop cannot be overstated. Because nothing is quite as motivating as seeing the bottom-line impact of your work. And nothing is quite as de-motivating as contributing to a project and not knowing the results of your contribution.
One of the most important things to note at this stage is not to overreach. Focus on recruiting the core team and resources needed to get your experimentation program rolling. Work to Inspire, Educate, and Inform these key people—Engineering and Design, your Executive Sponsor, and your Executive team.
As you scale the experimentation program and begin to empower supporting teams, your core team will need to Inspire, Educate, and Inform these supporting teams to get them up and running.
Of course, one person can’t shoulder Inspiring, Educating, and Informing for the entire organization. To support scale, you will want to implement systems that help to automate the actions of Inspire, Educate, and Inform.
Building momentum for experimentation and driving organizational change
So what does that look like? Let’s look at a slightly more mature organization.
Another partner of ours is a large, digitally mature financial services company. When we partnered with this company, there was already Executive-level buy-in for testing, as well as a general understanding of the value of experimentation.
The core experimentation team had been assembled. It consisted of an Experimentation Champion, an Executive Sponsor, and WiderFunnel as an enabling partner. The function of this core team was to enable supporting teams (rather than to execute experiments). As an organization matures, this is often the role that a core experimentation team moves into—a facilitating, enabling role rather than an executing role.
At this particular organization, the core experimentation team was trying to enable eight different product marketing teams to develop and launch digital experiments.
Taking advantage of opportunities to Inspire your organization
In this case, the Experimentation Champion had taken advantage of an opportunity to inspire the larger organization.
One of the highest visibility product teams at this company had just gone through a page redesign, which was performing terribly. To address this, the Champion brought in WiderFunnel to analyze the redesigned page, identify potential barriers to conversion, design a variation, and launch an experiment to try to improve page performance.
The core team was very confident that the experiment would win because the redesign was performing so terribly. There was a lot of potential. And they were right—the variation performed much better than the redesign. Because this was such a high-visibility product, the whole organization was watching.
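"Performed much better" is ultimately a statistical claim, and part of educating supporting teams is showing how a winner is actually called. A minimal sketch of one common approach, a two-proportion z-test on conversion counts (the visitor and conversion numbers below are hypothetical, not from this engagement):

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical: redesigned page (control) vs. experiment variation.
z, p = two_proportion_z_test(conv_a=400, n_a=10_000, conv_b=520, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value supports a real lift
```

A check like this keeps a high-visibility win credible: the whole organization is watching, so the result needs to hold up to scrutiny.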
The lesson here? Be opportunistic about promoting the experimentation mindset. The other seven product teams saw, first-hand, the new possibilities associated with experimentation. And they were chomping at the bit to get started; to get involved.
But first things first.
At this stage, documentation becomes critical. When one small team owns and operates experimentation, you can get away with little to no documentation. But as you expand into supporting teams, centralized documents, processes, and standards become necessary to enable proper knowledge transfer. Documentation is one of the systems that helps to ‘automate’ the actions of Inspire, Educate, and Inform.
Inspiring, Educating, and Informing at scale
To do this, the core team at this organization decided to leverage two primary activities: workshops and documentation.
Workshops became the primary vehicle for education and continued inspiration; documentation became the primary guide for how and whom to inform.
While the Champion in our previous example was educating her core team informally, via practical instruction, this organization needed to take a more formal approach to educate eight separate teams.
The core team needed to transfer knowledge around how to run experiments, but perhaps more importantly, around how to think about experimentation. The supporting teams had a very narrow view of ‘experimentation’. It was seen as a UX tactic—testing small tweaks to improve a particular conversion metric.
We wanted to educate these teams on the true potential of experimentation—how to use it to get real answers to real questions. To do this, we designed a series of workshops that walked these teams through various questions that they could and should be asking when developing experiment hypotheses.
There were questions that asked teams to refocus on their overall objectives and contextualize any experiment ideas within their broader goals.
There were questions that asked teams to refocus on their website visitors and customers and develop ideas based on qualitative and quantitative data: who their customers are and how they are using the website.
The workshops were also designed to lay a foundation for collaboration, asking teams to consider the experiments other teams were running and whether these insights might be relevant to their products and digital experiences.
These workshops were an interactive learning opportunity between the core team and the supporting teams. And this education was also inspirational: In asking new questions, the different team leads were able to envision entirely new possibilities for how to better develop and position their products leveraging experimentation. They were able to get excited about the possibilities.
Alongside these workshops, the core team was working to document a communications plan. The plan would have two main components:
Clarifying roles and responsibilities as the experimentation program scales, and
Clarifying who needs to be informed and how
To clarify roles and responsibilities, we recommend leveraging RACI, or a similar model. This model helps you map out who owns which piece of the overall task.
R = Responsible: The person who does the work to achieve the task.
A = Accountable: The person who is accountable for the correct and thorough completion of the task.
C = Consulted: The people who provide information for the project and with whom there is two-way communication.
I = Informed: The people kept informed of progress and with whom there is one-way communication.
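A RACI matrix is simple enough to capture as a small data structure, which makes it easy to keep in a shared repo alongside other program documentation. A minimal sketch with hypothetical tasks and roles for an experimentation workflow (none of these assignments come from the companies discussed here):

```python
# Hypothetical RACI matrix for an experimentation workflow.
# R = Responsible, A = Accountable, C = Consulted, I = Informed.
raci = {
    "Develop hypothesis": {"R": "Strategist", "A": "Champion",
                           "C": ["Design"], "I": ["Exec Sponsor"]},
    "Build variation":    {"R": "Engineering", "A": "Champion",
                           "C": ["Design", "Strategist"], "I": ["Exec Sponsor"]},
    "Analyze results":    {"R": "Data Science", "A": "Champion",
                           "C": ["Strategist"],
                           "I": ["Exec Sponsor", "Design", "Engineering"]},
}

def to_inform(task):
    """Everyone who needs a one-way update when `task` completes."""
    return raci[task]["I"]

print(to_inform("Analyze results"))
```

Having the matrix written down, rather than in someone's head, is what makes "who communicates to whom" answerable as the program scales.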
Along with clarifying roles and responsibilities, you need a documented communications plan, which should:
Identify the information that needs to be communicated
List the methods of communication (formal and informal) and how they’ll be utilized
Determine the line of communication: Who communicates to whom?
Be intentional with timing
MailChimp is an example of an organization that prioritizes communication, and they have seen a lot of success in scaling their experimentation program. I first spoke to the Experimentation Champion there a year ago—she was just getting the program off the ground in the Marketing team. Today, MailChimp is testing on their Marketing site, on three of their nine product domains, and within their technical content team.
The Champion I’m referring to is Lauren Schuman, now Senior Director of Product Insights & Growth. Lauren explained that documentation and communication were a priority from the very beginning at MailChimp.
We documented every single step of the workflow and did a RACI Model associated with it—so that we were very clear on who was doing what, who we needed to consult versus inform. We then mapped the actual communication strategy […] And this has been a major contributor to how we’ve scaled and how we’ll be able to scale in the future.
—Lauren Schuman, Senior Director of Product Insights & Growth at MailChimp
Recently, Lauren shared that she now refers to her internal communications plan as a “marketing plan”, adopting a different mentality. She is marketing the experimentation mindset internally, identifying segments, channels, and relevant messages. The intention is to drive enthusiasm, affinity, and adoption.
Getting your organization psyched about experimentation
Inspire, Educate, Inform. These three actions are tools you must use as you scale your experimentation program. You should determine when to create systems that enable these actions at each stage of maturity.
When you assemble your core experimentation team, you, as the Champion, have to make sure that you are Inspiring, Educating, and Informing your key stakeholders. Whether this happens informally or formally, your core team should be excited about experimentation, they should understand it and its value, and the information loop should be closed before you attempt to expand the program into supporting teams.
As you scale, build systems that will help you automate these actions as you pursue the ultimate goal: A culture of experimentation that permeates your entire organization.
Keep in mind that your communications strategy, like experimentation, is an iterative process. Don’t expect perfection right away; work with your core team and, eventually, your supporting teams to determine which messages and channels work best. Be willing to evolve your strategy, but stay committed to Inspiring, Educating, and Informing the people around you.
Are you working to shift organizational culture and promote an experimentation mindset? What challenges are you facing? What successes are you seeing? We’d love to hear from you! Leave your thoughts in the comments section below.
Michael St Laurent
Director of Experimentation Strategy & Product Development Lead