  Because Facebook was not transparent, anxiety and mistrust escalated in 2016. Complaints were lodged against digital platforms for charging advertisers for ads that tricked machines into believing an ad message had been seen by humans when it had not, or for algorithms that placed friendly ads on unfriendly platforms. “What hasn’t been reported on,” 21st Century Fox CEO James Murdoch said over breakfast in January 2016, “is how much fraud there is in the industry. It has famously been said of advertising, ‘I know I’m wasting half my money, but I don’t know which half.’ Digital advertising was supposed to solve that. But, actually, questions of viewability, questions of bots, of creating false impressions, the leakage—the amount of fraud is staggering.” A report written two years earlier by Pivotal Research Group analyst Brian Wieser warned: “Perpetrators of fraud and sellers of wasteful inventory in general become increasingly clever as they look for new ways to produce the appearance of traffic. . . . This is all the greater an issue as programmatic buying of media takes root,” and buyers increasingly focus on “less expensive long-tail inventory,” not premium platforms. By the end of 2016, one study put the cost of ad fraud to advertisers at $12.5 billion.*

  * * *

  ■ ■ ■

  Programmatic advertising’s ability to organize and crunch data and target individuals in two tenths of a second relies—as do digital assistants like Alexa or Viv—on AI, today driven largely by machine learning. Since AI first gained serious attention in the 1950s, a debate has raged: Is AI real intelligence, or is it merely a machine programmed to memorize data? And are AI robots potentially uncontrollable Frankenstein monsters? There is real anxiety about how to sell the advantages of AI without scaring people with such images.

  This was a key subject at another General Electric monthly marketing meeting at their Madison Avenue offices in late 2016. Opening a discussion about their marketing strategy for the coming year, CMO Linda Boff said to the half dozen agency representatives gathered around a large table, “The value of a digital company is much higher than an industrial company.” GE needed a defining message and a market. “Amazon and Facebook did that. When we think of our story, what makes us different than IBM? They’ve done well selling artificial intelligence and Watson. But AI is kind of creepy. What do we do to make GE more efficient that is not creepy?”

  “We want to harness machines, not let machines harness us,” chimed in her deputy, Andy Goldberg.

  “How do we talk about that?” Boff asked. “Do we in 2017 take six cases and every month tell a different story?”

  “This is the focus,” Goldberg said, drilling down. “One thing we all need in storytelling is the human element. Our humor is important. We’re human.” We’re not machines.

  “We need fresh ways to tell the GE story of moving from an industrial to a digital company,” Boff said. How “do we tell stories of outcomes in human terms?”

  “As Linda often says, ‘We need people to fall in love with us,’” Goldberg said.

  “One advantage we have,” said one of several representatives from the Giant Spoon agency, “is that we can tell stories from factory floors, and IBM can’t.”

  They would not reach a consensus in this meeting. Boff ended the discussion by asking the six agency reps to come up with ideas to humanize AI and GE. They would reconvene for an all-day meeting she would schedule.

  At a minimum, there is consensus on what AI requires, as one of Microsoft’s Distinguished Engineers, James Whittaker, has written. AI relies on three things: “It is, first, a vast amount of data which is, second, organized so well a computer can understand its structure and relevance and then, third, crunched at blazingly fast speeds. The reason that progress in AI has seemed so pronounced in the past few years is that technological advances in all three areas have accelerated.”* The race to dominate AI has companies with deep pockets—Google, Facebook, Amazon, IBM, Oracle, Apple, Salesforce.com, among others—vying to hire engineers and data scientists.

  “The bulk of Fox advertising will be sold by machines,” predicts James Murdoch, who goes on to say this will threaten the existence of the advertising holding companies. “The bulk of their business, the buying of media and the analysis of how to generate reach at a low incremental cost, it’s hard to see what their role is twenty years from now. Once you have an investment in machines, it’s just a math problem.” Brian Lesser agrees that machines will perform functions that many GroupM employees now perform, but he believes those machines will be employed by GroupM.

  * * *

  ■ ■ ■

  Whittaker has argued that traditional advertising will become less relevant because, when data is assembled, the machines will “determine someone’s intent, not their interest,” as a search merely does. Airbnb’s CMO Jonathan Mildenhall thinks agencies will be further disrupted because AI will allow “customization.” He then offers a sweeping generalization that marginalizes human creativity: as the “algorithms anticipate profiles of individuals, brands can engineer without the need for human creativity.” The machines will craft the ads. Few question that as advertisers know more about individuals and their actual desires, precise marketing messages can be pushed to them. Messages pushed by a personal assistant like Viv could disrupt advertising, Dag Kittlaus said after his TechCrunch presentation. “Priceline is the largest travel company in the world,” he explained. “They spend over one billion dollars a year on Google’s AdWords. But imagine if we have tools for Priceline to build a transformative travel agent. Imagine that you say, ‘I want to take my kids away in the third week of March. Find a place in the Caribbean to take my kids.’ Your personal assistant looks at your last five trips so it has a general idea of the budget parameters. It knows what the weather’s going to be there.” The personal assistant works it out with Priceline. Instead of buying expensive ads, Priceline would pay a fee to Viv only when a sale is registered. Thus Kittlaus believes: “People will migrate away from what I call the discovery economy on the Internet to a consumption economy.”

  In this consumption economy, plays or restaurants or car services would also pay a service fee for Viv-referred business. Imagine, he said, that you arranged a date on Match.com. Your digital assistant knows you and your date both like the theater, and asks, “Would you like me to get tickets to this show? Would you like me to have a car pick you up? Make a reservation at this restaurant?” Of course, if Viv only recommended the restaurants or vendors who paid it the steepest fees or who were owned by a corporate parent, it might lose the confidence of consumers. Or run afoul of government regulators, as has happened to Google and Facebook in Europe.

  Viv threatened to be a classic start-up-in-a-garage menace to Google, Facebook, Amazon, and Watson, but to succeed Kittlaus knew he needed scale. He needed a partner with a large existing base of mobile phones in order to have Viv installed on millions of smartphones. In the fall of 2016, Viv was acquired by Samsung. But when Samsung introduced its new Galaxy smartphone in 2017, the Viv digital assistant was not included.

  Andrew Robertson, CEO of BBDO and a Mad Men look-alike in a grey pin-striped suit, white suspenders, and polka-dot tie, disagrees that machines will cripple creative agencies. Nor does he believe the desire for creative ads will disappear. He agrees that machines will target individuals with precision, altering the traditional way agencies operate and threatening agencies that fail to change. He agrees that traditional ad formats have to change. But he disagrees that machines can create compelling ads. Because there are so many more platforms on which ads appear, “the need for creativity goes up every single day because you are seeing more ads than you ever saw before.” And with video becoming the principal way for advertisers to reach consumers on mobile devices, and with just the first two to three seconds of that video to win the consumer’s attention, he concludes, “Creativity becomes more important. So Math Men and Mad Men are joined.”

  The other potentially disruptive technology is what’s come to be called the Internet of things, or IoT: “smart devices” with Bluetooth connections—refrigerators, light bulbs, watches, thermostats, washing machines, coffeepots, cars, baby pacifiers, and so on. In 2016, Gartner, Inc., a technology research firm, estimated that there were 6.4 billion connected “things,” a number it expected to jump to 20.8 billion in four years. These smart devices will yield a cornucopia of data. They can alert your store when the milk or ketchup in your refrigerator needs replenishing or when your washing machine needs more soap, and a device on top of your TV can monitor your facial expressions and report whether you watched a commercial. There are, of course, a plethora of unanswered questions: Will these devices be receptive to marketing messages? Why can’t simple marketing messages—your light bulb will soon expire—be crafted by AI without input from an agency? Will devices that give brands a direct relationship with consumers reduce ad spending because, for instance, Heinz ketchup is in touch with your refrigerator? Will another device, virtual reality glasses, be conducive to native or product placement ads as you’re transported to a U2 performance or the surface of Mars? With devices in homes connected to marketers, will consumers recoil, feeling spied upon?

  What is unassailable is that the combination of rich data and technology fundamentally transforms marketing. Some of the guessing as to who saw a marketing message and whether they needed or wanted a product dwindles. Ads sprayed to demographic groups can be aimed at individuals. With online sales, geography becomes less important. And with customers having more information about merchandise, coupled with a growing unwillingness to be interrupted by ad pitches, many products will become commodities. This will lead, inevitably, to brands seeking to develop direct consumer relationships, as when Bevel, a grooming product for men of color, inaugurated Bevel Code, a site providing information and a community for its customers. Agencies, Rishad Tobaccowala says, “have to increasingly think less about advertising and more about how to deliver utilities and services.” He cited Walgreens’ enticing app, hailing it as “the best form of utilities and services in the United States.” Every time he enters a Walgreens he gets points, which he need not record because they are automatically added to his app. He receives alerts that his prescriptions are ready, and he gets five hundred Balance Rewards points for every refill. He can send photos for Walgreens to print. It offers ways to save money. “This is a utility and a service. We have to stop just creating ads and start creating experiences.” By creating experiences, Tobaccowala predicts clients will shift and “spend more on marketing than on advertising,” thus increasing the spending disparity that exists today.

  * * *

  ■ ■ ■

  Central to this shift is technology, which has an underside, as Facebook has painfully learned. On the eve of Advertising Week in September 2016, it was revealed that the Math Men at Facebook had overestimated the average time viewers spent watching video by up to 80 percent. For two years, Facebook reported two numbers: the number of people who viewed a video for more than three seconds, and the average time spent watching. The math mistake was in how that average was calculated: Facebook totaled all the video watch time, including that of the large numbers of viewers who watch for only a second or two, but then divided by only the number who watched more than three seconds, inflating the average view time. When this surfaced, client and agency confidence was shaken. This proved, Martin Sorrell and Keith Weed independently said, the need for Facebook to open its walled garden and allow an independent measurement of its results. Bob Liodice chimed in with his agreement.
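
  A minimal sketch, using hypothetical viewer numbers, shows how the two calculations diverge: short views add watch time to the numerator but are excluded from the denominator, so the reported average overstates the true one.

```python
# Hypothetical watch times (in seconds) for ten viewers of one video.
# Most viewers drop off after a second or two; a few watch much longer.
watch_times = [1, 1, 2, 1, 2, 1, 30, 45, 20, 25]

THRESHOLD = 3  # a "view" was counted only past three seconds

total_watch_time = sum(watch_times)                              # 128 seconds
all_viewers = len(watch_times)                                   # 10 viewers
counted_viewers = sum(1 for t in watch_times if t > THRESHOLD)   # 4 viewers

# Correct average: total watch time divided by everyone who started the video.
true_average = total_watch_time / all_viewers        # 12.8 seconds

# Reported average: the same total divided only by viewers past the threshold,
# which inflates the figure because short views feed the numerator but not
# the denominator.
reported_average = total_watch_time / counted_viewers  # 32.0 seconds

print(f"true average:     {true_average:.1f}s")
print(f"reported average: {reported_average:.1f}s")
```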

  Days later, at an Advertising Week panel, Carolyn Everson addressed the “video metric error,” and said that while it understandably shook advertiser confidence, it did not cost advertisers a single dollar extra or affect their return-on-investment measurements, a conclusion seemingly shared by most of the advertising community. She said the mistake was “a lesson learned.” The lesson? They should have made the disclosure when they learned of the mistake a month earlier, she said. She also said Facebook does believe in “third-party verification” of its data, a claim hotly disputed by the ad community.

  Unfortunately for the otherwise popular Everson, Facebook would punch itself in the nose again just two months later. An internal audit, the company disclosed, revealed that Facebook miscounted views on four of its products, including time spent with publications on its Instant Articles program. Facebook’s machines inadvertently counted repeat visitors more than once when reporting its visitor total. Once again, although the ad community was unhappy, they agreed: the mistakes did not penalize them financially. Once again, the ad community complained loudly about Facebook’s failure to have its data independently monitored.

  Then it happened again. Over the next several months and into the start of 2017, Facebook would admit a total of ten measurement mistakes. Google also reluctantly admitted measurement errors, the most egregious being programmatic buying on Facebook and YouTube that placed friendly ads on unfriendly sites, including racist, extremist, and pornography sites. None of this was done venally. The ads were targeted by keywords, like the Confederacy or race. But the errors undermined trust and strengthened the ad community’s claim that a referee was needed so Facebook and Google no longer graded their own homework.

  By early 2017, the advertising community was less forgiving. Walmart, PepsiCo, and others pulled ads from YouTube. Bob Liodice assailed digital platforms for harming brands and called for an audit of their spending. Appearing in January before the Annual Leadership Meeting of the organization that represents digital companies, the Interactive Advertising Bureau, Procter & Gamble’s Marc Pritchard declared, “The days of giving digital a pass are over.” He stipulated a five-point program that the world’s largest advertiser expected digital companies to comply with—or else. Citing “brand safety” concerns, Havas pulled its ads off Google in London. Martin Sorrell slammed Google for failing “to step up and take responsibility.”

  * * *

  ■ ■ ■

  Google and Facebook sought to assuage advertisers, offering contrite promises to fix their mistakes and to welcome more independent measurement. But these were not mistakes that could be so easily fixed. The limitations of Math Men were parading across the runway. “We’ve gone from the era of Mad Men to mad metrics,” News Corp’s CEO Robert Thomson declared at a UBS Global Media and Communications Conference. An overreliance on machines and a belief that they were engaged in mistake-proof science produced an opaque mathematical model. Although engineers are fallible humans, they often assume their algorithms are infallible. When told that Facebook’s mechanized defenses had failed to screen out “fake” news planted on the social network to sabotage Hillary Clinton’s presidential campaign, Mark Zuckerberg publicly dismissed the assertion as “crazy.” His computers—his science—wouldn’t allow it. Or as Eric Schmidt, the executive chairman of Alphabet, the parent company of Google, said at its 2017 annual shareholder meeting, “We start from the principles of science at Google and Alphabet.” Yet programmatic machines rely on algorithms to make ad buys and to attach ads to certain words: they might correctly target a consumer who spends time doing history-themed searches on Google, but then dumbly place the ad on a white-supremacy site that cites history, or its version of history, offering one clear limitation of AI.
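
  A toy sketch, with made-up page text and an assumed keyword list, illustrates that failure mode: a keyword-only placement rule matches any page that mentions “history,” so a friendly ad can land on an extremist page just as easily as on a legitimate one unless a brand-safety layer intervenes.

```python
# Toy illustration of keyword-only ad placement (hypothetical pages and keywords).
AD_KEYWORDS = {"history", "civil war", "museum"}

PAGES = {
    "documentary-review": "A new museum exhibit retells civil war history.",
    "extremist-blog": "Our version of history proves the old order was right.",
}

def keyword_match(text: str) -> bool:
    """Place the ad wherever any keyword appears; no sense of context or safety."""
    text = text.lower()
    return any(keyword in text for keyword in AD_KEYWORDS)

for page_name, page_text in PAGES.items():
    if keyword_match(page_text):
        print(f"ad placed on: {page_name}")

# Both pages match on "history", so the ad lands on the extremist blog too.
# A brand-safety layer (site blocklists, page classification, human review)
# has to sit on top of the keyword rule to keep that from happening.
```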

  The University of Michigan’s Christian Sandvig demonstrates how tools like search algorithms can be faulty guides. He offers the example of an insurance company that purchased data and might blackball an applicant because he spent a lot of time searching for Alcoholics Anonymous. “A lot of judgments the insurance company may make are unreliable. Maybe you were Googling Alcoholics Anonymous for a friend. Maybe a lot of people in the house use your computer.” The weaknesses of an overreliance on algorithms were also exposed by a ProPublica reporting team that examined computerized “risk scores” used across the country to determine whether those arrested are high or low risks to commit future crimes. They obtained the risk scores of seven thousand people arrested in Broward County, Florida, and found that the risk-factor algorithm “proved remarkably unreliable” because only 20 percent of those “predicted to commit violent crimes actually went on to do so.” White defendants with multiple arrests “were mislabeled as low risk more often than black defendants” with scant rap sheets. If you had a job, even with a criminal record, the algorithm likely ranked you as a lower crime risk than a homeless black man.*

  Cathy O’Neil, a data scientist, explored the biases of the algorithms that increasingly rule our lives in her 2016 book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. For people with bad credit scores who live in high-crime precincts, she writes, algorithms shower “them with predatory ads for subprime loans,” and the same data is used to “block them from jobs” and drive their credit rating down.

  Mathematical algorithms and AI tools are important but limited, says Wendy Clark of DDB. They offer science but not art. “Play it out in your life,” she continues. “You go to a neighbor’s cocktail party. You have a conversation with someone that is quite generic—‘What did you do today? How was work? How many kids do you have?’” That is her version of an algorithmic conversation. On the other hand, at the same cocktail party you might meet someone and say, “‘Oh wow! You’re a nurse! My mother was a nurse! Gosh, tell me about where you work?’” The first conversation is generic; the second, engaging. What humans have that machines lack is empathy. If consumers are swayed by emotion, then the copywriter has some advantages over the algorithm.