The fallout from the Cambridge Analytica scandal showed that consumers care how their data is used, even if they don’t understand it fully. Marketers shouldn’t be exploiting that knowledge gap—we should be trying to close it.
When the Cambridge Analytica scandal broke in March 2018, my response betrayed my profession. What, I wondered, was everyone so surprised by?
To be perfectly clear, what Cambridge Analytica did was unethical. They collected information under the guise of academic research, then monetized that information in a way users never consented to. But for my fellow digital marketers, what they were able to do should not have been the least bit surprising. The consumer data landscape is such that it was only a matter of time before someone did it.
The true lesson of the Cambridge Analytica scandal wasn’t that there are bad actors who are going to exploit Facebook and other digital platforms for their own ends. It’s that the average consumer doesn’t understand how their personal information is being gathered and monetized in exchange for a (mostly) free internet—and when it’s revealed to them, they are shocked and upset.
With that understanding, it’s worth marketers asking some uncomfortable questions: How would consumers react if they knew exactly how we use their data? What makes us different from Cambridge Analytica, and where do we need to step up?
Let’s take a step back. According to Facebook, Aleksandr Kogan, the researcher who developed the app that harvested the data in the first place, “requested and gained access to information from users who chose to sign up to his app, and everyone involved gave their consent. People knowingly provided their information, no systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.” The information in question was all information available to a user’s Facebook friends: the city listed on their profile, content they liked, and who their friends were—not exactly state secrets.
So the information uncovered was innocuous, and everyone opted in to sharing that data by downloading the app (or, in the case of friends of app downloaders whose information was also scraped, through the privacy settings on their own profiles). What’s the big deal?
The big deal is two-fold.
First, users may technically have consented to sharing their information, but they overwhelmingly seem not to have understood what they consented to. This isn’t a big surprise: According to a 2017 study by Deloitte, 91 percent of people accept terms and conditions without reading them. From a purely legal perspective, consumer naïveté can excuse a whole host of sins on the part of technology companies, but it’s cold comfort for the user who trusted, rightly or wrongly, that they weren’t signing up for anything nefarious.
Second, this information, though technically shared with consent, was then used in a manner that users found objectionable. Kogan passed the information to Cambridge Analytica without users’ knowledge—not exactly the psychological research it was supposedly being used for—and the political consultancy then used that information to target users with political ads.
Facebook has, rightfully, faced intense criticism for enabling Cambridge Analytica in the first place. Though its initial response left a lot to be desired, in the year that followed the company examined its complicity in consumer-unfriendly practices within its ecosystem. From kicking out third-party data vendors to Zuckerberg’s recent manifesto on a “privacy-focused vision” for the future of the company, Facebook appears to have taken the backlash to heart. (To be fair, it had some serious legal incentive to do so.)
But what about marketers’ self-examination? The political aspect obscures how much Cambridge Analytica’s actions resemble common marketing practices. Replace ads for Donald Trump or Brexit with ads for dog food or baby wipes, and what Cambridge Analytica did doesn’t sound so out there. Sourcing consumer data from third parties, then using that information to target people with persuasive messaging? That’s a normal day in marketing land.
Writing for Marketing Week in the immediate aftermath of the scandal, Mark Ritson acerbically noted, “In the past two weeks I’ve had the same uncomfortable conversation with several senior marketers. We start with a quick summary of what is going on in the Cambridge Analytica saga. We shift to what this might mean for Facebook, digital media and general marketing. And then, after a pause, the marketer inevitably says with a sheepish grin: ‘We’ve been doing this shit for years.’”
Nearly three-quarters of consumers say Cambridge Analytica made them concerned about how their personal information is used. But if that reaction prompted a period of soul-searching for marketers who’ve been “doing this shit for years,” then they’ve been doing it very, very quietly.
No one is genuinely advocating that marketers abandon consumer data use altogether. Not only would that be unrealistic (it’s a business practice that significantly predates the digital era, after all), but it would benefit neither companies nor consumers. According to research from Epsilon, 80 percent of consumers report being more likely to purchase from a brand that offers them a personalized experience—experiences companies can’t deliver without at least some personal information about those consumers.
But that doesn’t let marketers off the hook. If consumers felt violated by Cambridge Analytica, we have to be clear on how we’re guarding against committing the same offenses or risk facing the same backlash.
Cambridge Analytica blatantly exploited a major consumer knowledge gap about how personal information is collected and sold between parties. The scandal laid clear that what people consent to by default and what they believe they have consented to are not the same thing—and while a letter-of-the-law opt-in is a nice CYA for companies should things go south, it does nothing to generate consumer trust.
With this in mind, for companies to collect and activate consumer data while retaining consumer trust, two things need to happen.
First, marketers need to wean themselves off third-party data and approach second-party data with caution. Cambridge Analytica serves as a cautionary tale for data-sharing: While Facebook users had legally given their consent for their personal information to be shared with third parties, they did not see themselves as having consented. Facebook paid heavily for that error in judgment, and brands could easily find themselves caught in similar circumstances. Third-party data is already unreliable; under the right circumstances, it could also prove to be a brand safety risk.
Second, marketers should prioritize explicit over implicit first-party data—that is, information that a consumer willingly volunteers as opposed to information that is inferred from their behavior. Personalization using information that a consumer knows they have given is a value-add, but personalization using information that a consumer feels has been taken without their consent is creepy and unwelcome.
With all that said, a free and open exchange of consumer information only works if it is just that: an exchange. Marketers must actively create value for consumers in exchange for access to their information—or, backed by new data privacy laws, consumers are going to start revoking that privilege. Cambridge Analytica may seem like a critical flashpoint, but it’s really only one moment in a larger movement towards greater consumer privacy. That’s an unsettling proposition for marketers, but those who get it right stand to gain consumer trust at a time when it’s in short supply.