

FB Exec’s Memo: Trump Didn’t ‘Cheat’ — He Ran The Best Digital Ad Campaign, Ever

“He got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser. Period.”

This is not some MAGA-hat wearing ‘shill’ for Trump. This is someone who maxed out his donation to the Hildebeest. But he’s calling it how he sees it. He’s even saying that FB should NOT abuse its reach in an effort to tilt the 2020 election result… using a Lord of the Rings analogy about the ring of power.

Andrew Bosworth, a Facebook executive, weighed in with his own thoughts and observations on all the hype about 2016 — Facebook ads, Russian attempts to impact the election, Cambridge Analytica and all the rest.

He doesn’t see Russian collusion making a difference. He calls the Cambridge Analytica fiasco BS: snake-oil salespeople trying to inflate their own importance.

Here’s the entire (rather long) memo. Judge for yourself.

While it originally appeared in the NYTimes, it’s behind a paywall. We sourced this one from The Verge.

Thoughts for 2020

The election of Donald Trump immediately put a spotlight on Facebook. While the intensity and focus of that spotlight may be unfair I believe it isn’t unjust. Scrutiny is warranted given our position in society as the most prominent of a new medium. I think most of the criticisms that have come to light have been valid and represent real areas for us to serve our community better. I don’t enjoy having our flaws exposed, but I consider it far better than the alternative where we remain ignorant of our shortcomings.

One trap I sometimes see people falling into is to dismiss all feedback when they can invalidate one part of it. I see that with personal feedback and I see it happening with media coverage. The press often gets so many details wrong it can be hard to trust the veracity of their conclusions. Dismissing the whole because of flaws in parts is a mistake. The media has limited information to work with (by our own design!) and they sometimes get it entirely wrong but there is almost always some critical issue that motivated them to write which we need to understand.

It is worth looking at the 2016 Election which set this chain of events in motion. I was running our ads organization at the time of the election and had been for the four years prior (and for one year after). It is worth reminding everyone that Russian Interference was real but it was mostly not done through advertising. $100,000 in ads on Facebook can be a powerful tool but it can’t buy you an American election, especially when the candidates themselves are putting up several orders of magnitude more money on the same platform (not to mention other platforms).

Instead, the Russians worked to exploit existing divisions in the American public, for example by hosting Black Lives Matter and Blue Lives Matter protest events in the same city on the same day. The people who showed up to those events were real even if the event coordinator was not. Likewise the groups of Americans being fed partisan content were real even if those feeding them were not. The organic reach they managed sounds very big in absolute terms and unfortunately humans are bad at contextualizing big numbers. Whatever reach they managed represents an infinitesimal fraction of the overall content people saw in the same period of time and certainly over the course of an election across all media.

So most of the information floating around that is widely believed isn’t accurate. But who cares? It is certainly true that we should have been more mindful of the role both paid and organic content played in democracy and been more protective of it. On foreign interference, Facebook has made material progress and while we may never be able to fully eliminate it I don’t expect it to be a major issue for 2020.

Misinformation was also real and related but not the same as Russian interference. The Russians may have used misinformation alongside real partisan messaging in their campaigns, but the primary source of misinformation was economically motivated. People with no political interest whatsoever realized they could drive traffic to ad-laden websites by creating fake headlines and did so to make money. These might be more adequately described as hoaxes that play on confirmation bias or conspiracy theory. In my opinion this is another area where the criticism is merited. This is also an area where we have made dramatic progress and don’t expect it to be a major issue for 2020.

It is worth noting, as it is relevant at the current moment, that misinformation from the candidates themselves was not considered a major shortcoming of political advertising on FB in 2016 even though our policy then was the same as it is now. These policies are often covered by the press in the context of a profit motive. That’s one area I can confidently assure you the critics are wrong. Having run our ads business for some time it just isn’t a factor when we discuss the right thing to do. However, given that those conversations are private I think we can all agree the press can be forgiven for jumping to that conclusion. Perhaps we could do a better job exposing the real cost of these mistakes to make it clear that revenue maximization would have called for a different strategy entirely.

Cambridge Analytica is one of the more acute cases I can think of where the details are almost all wrong but I think the scrutiny is broadly right. Facebook very publicly launched our developer platform in 2012 in an environment primarily scrutinizing us for keeping data to ourselves. Everyone who added an application got a prompt explaining what information it would have access to and at the time it included information from friends. This may sound crazy in a 2020 context but it received widespread praise at the time. However the only mechanism we had for keeping data secure once it was shared was legal threats which ultimately didn’t amount to much for companies which had very little to lose. The platform didn’t build the value we had hoped for our consumers and we shut this form of it down in 2014.

The company Cambridge Analytica started by running surveys on Facebook to get information about people. It later pivoted to be an advertising company, part of our Facebook Marketing Partner program, who other companies could hire to run their ads. Their claim to fame was psychographic targeting. This was pure snake oil and we knew it; their ads performed no better than any other marketing partner (and in many cases performed worse). I personally regret letting them stay on the FMP program for that reason alone. However at the time we thought they were just another company trying to find an angle to promote themselves and assumed poor performance would eventually lose them their clients. We had no idea they were shopping an old Facebook dataset that they were supposed to have deleted (and certified to us in writing that they had).

When Trump won, Cambridge Analytica tried to take credit so they were back on our radar but just for making bullshit claims about their own importance. I was glad when the Trump campaign manager Brad Parscale called them out for it. Later on, we found out from journalists that they had never deleted the database and had instead made elaborate promises about its power for advertising. Our comms team decided it would be best to get ahead of the journalists and pull them from the platform. This was a huge mistake. It was not only bad form (justifiably angering the journalists) but we were also fighting the wrong battle. We wanted to be clear this had not been a data breach (which, to be fair to us, it absolutely was not) but the real concern was the existence of the dataset no matter how it happened. We also sent the journalists legal letters advising them not to use the term “breach” which was received normally by the NYT (who agreed) and aggressively by The Guardian (who forged ahead with the wrong terminology, furious about the letter) in spite of it being a relatively common practice I am told.

In practical terms, Cambridge Analytica is a total non-event. They were snake oil salespeople. The tools they used didn’t work, and the scale they used them at wasn’t meaningful. Every claim they have made about themselves is garbage. Data of the kind they had isn’t that valuable to begin with and worse it degrades quickly, so much so as to be effectively useless in 12-18 months. In fact the United Kingdom Information Commissioner’s Office (ICO) seized all the equipment at Cambridge Analytica and found that there was zero data from any UK citizens! So surely, this is one where we can ignore the press, right? Nope. The platform was such a poor move that the risks associated were bound to come to light. That we shut it down in 2014 and never paid the piper on how bad it was makes this scrutiny justified in my opinion, even if it is narrowly misguided.

So was Facebook responsible for Donald Trump getting elected? I think the answer is yes, but not for the reasons anyone thinks. He didn’t get elected because of Russia or misinformation or Cambridge Analytica. He got elected because he ran the single best digital ad campaign I’ve ever seen from any advertiser. Period.

To be clear, I’m no fan of Trump. I donated the max to Hillary. After his election I wrote a post about Trump supporters that I’m told caused colleagues who had supported him to feel unsafe around me (I regret that post and deleted it shortly after).

But Parscale and Trump just did unbelievable work. They weren’t running misinformation or hoaxes. They weren’t microtargeting or saying different things to different people. They just used the tools we had to show the right creative to each person. The use of custom audiences, video, ecommerce, and fresh creative remains the high water mark of digital ad campaigns in my opinion.

That brings me to the present moment, where we have maintained the same ad policies. It occurs to me that it very well may lead to the same result. As a committed liberal I find myself desperately wanting to pull any lever at my disposal to avoid the same result. So what stays my hand?

I find myself thinking of the Lord of the Rings at this moment. Specifically when Frodo offers the ring to Galadriel and she imagines using the power righteously, at first, but knows it will eventually corrupt her. As tempting as it is to use the tools available to us to change the outcome, I am confident we must never do that or we will become that which we fear.

The philosopher John Rawls reasoned that the only moral way to decide something is to remove yourself entirely from the specifics of any one person involved, behind a so called “Veil of Ignorance.” That is the tool that leads me to believe in liberal government programs like universal healthcare, expanding housing programs, and promoting civil rights. It is also the tool that prevents me from limiting the reach of publications who have earned their audience, as distasteful as their content may be to me and even to the moral philosophy I hold so dear.

That doesn’t mean there is no line. Things like incitement of violence, voter suppression, and more are things that same moral philosophy would safely allow me to rule out. But I think my fellow liberals are a bit too, well, liberal when it comes to calling people Nazis.

If we don’t want hate mongering politicians then we must not elect them. If they are getting elected then we have to win hearts and minds. If we change the outcomes without winning the minds of the people who will be ruled then we have a democracy in name only. If we limit what information people have access to and what they can say then we have no democracy at all.

This conversation often raises the alarm around filter bubbles, but that is a myth that is easy to dispel. Ask yourself how many newspapers and news programs people read/watched before the internet. If you guessed “one and one” on average you are right, and if you guessed those were ideologically aligned with them you are right again. The internet exposes them to far more content from other sources (26% more on Facebook, according to our research). This is one that everyone just gets wrong.

The focus on filter bubbles causes people to miss the real disaster which is polarization. What happens when you see 26% more content from people you don’t agree with? Does it help you empathize with them as everyone has been suggesting? Nope. It makes you dislike them even more. This is also easy to prove with a thought experiment: whatever your political leaning, think of a publication from the other side that you despise. When you read an article from that outlet, perhaps shared by an uncle or nephew, does it make you rethink your values? Or does it make you retreat further into the conviction of your own correctness? If you answered the former, congratulations you are a better person than I am. Every time I read something from Breitbart I get 10% more liberal.

What does all of this say about the nature of the algorithmic rewards? Everyone points to top 0.1% content as being acutely polarized but how steep are the curves? What does the top 1% or 5% look like? And what is the real reach across those curves when compared to other content? I think the call for algorithmic transparency can sometimes be overblown but being more transparent about this type of data would likely be healthy.

What I expect people will find is that the algorithms are primarily exposing the desires of humanity itself, for better or worse. This is a Sugar, Salt, Fat problem. The book of that name tells a story ostensibly about food but in reality about the limited effectiveness of corporate paternalism. A while ago Kraft Foods had a leader who tried to reduce the sugar they sold in the interest of consumer health. But customers wanted sugar. So instead he just ended up reducing Kraft market share. Health outcomes didn’t improve. That CEO lost his job. The new CEO introduced quadruple stuffed Oreos and the company returned to grace. Giving people tools to make their own decisions is good but trying to force decisions upon them rarely works (for them or for you).

In these moments people like to suggest that our consumers don’t really have free will. People compare social media to nicotine. I find that wildly offensive, not to me but to addicts. I have seen family members struggle with alcoholism and classmates struggle with opioids. I know there is a battle for the terminology of addiction but I side firmly with the neuroscientists. Still, while Facebook may not be nicotine I think it is probably like sugar. Sugar is delicious and for most of us there is a special place for it in our lives. But like all things it benefits from moderation.

At the end of the day we are forced to ask what responsibility individuals have for themselves. Set aside substances that directly alter our neurochemistry unnaturally. Make costs and trade-offs as transparent as possible. But beyond that each of us must take responsibility for ourselves. If I want to eat sugar and die an early death that is a valid position. My grandfather took such a stance towards bacon and I admired him for it. And social media is likely much less fatal than bacon.

To bring this uncharacteristically long and winding essay full circle, I wanted to start a discussion about what lessons people are taking away from the press coverage. My takeaway is that we were late on data security, misinformation, and foreign interference. We need to get ahead of polarization and algorithmic transparency. What are the other big topics people are seeing and where are we on those?

Wes Walker

Wes Walker is the author of "Blueprint For a Government that Doesn't Suck". He has been lighting up Clashdaily.com since its inception in July of 2012. Follow on Twitter: @Republicanuck