Political Lies, Behavioral Advertising, and the Complicity of Algorithms

PRIVACY PLUS+

Privacy, Technology and Perspective

This week, and for the past month, Facebook has faced heat for failing to police disinformation in political ads. In response, Facebook has generally taken the legally correct (albeit perhaps unfortunate) position that “political speech” falls under the protection of the First Amendment. “We don’t believe…that it’s an appropriate role for us to referee political debates and prevent a politician’s speech from reaching its audience….,” said Facebook VP of Global Affairs and Communications Nick Clegg in the following blog post:

https://newsroom.fb.com/news/2019/09/elections-and-political-speech/

For more, you can watch this uncomfortable video of Facebook CEO Mark Zuckerberg being questioned by Rep. Alexandria Ocasio-Cortez, D-N.Y.:

https://www.cnbc.com/2019/10/23/aoc-grills-zuckerberg-over-facebook-allowing-lies-in-political-ads.html

A compelling counterargument is that Facebook and other platforms don’t have to run paid ads that contain politicians’ lies; instead, they choose to do so for profit, and thereby spread lies that generate political donations and still more lies. It appears in the following TechCrunch opinion post by Josh Constine:

https://techcrunch.com/2019/10/13/ban-facebook-campaign-ads/

We would add that Facebook isn’t just a neutral, impersonal public square where political (and other) advertisers reach an audience; rather, Facebook, through its algorithms, plays its own, independent role in determining how ads are placed and, therefore, which audiences those ads reach. It follows that Facebook cannot completely wash its hands of complicity in its advertisers’ misdeeds.

But something else is troubling –

In National Fair Housing Alliance et al. v. Facebook, Inc., Case No. 18-cv-02689-JGK (S.D.N.Y.), which settled in March 2019, the NFHA and affiliated parties sued Facebook, alleging that Facebook facilitated illegal housing discrimination by allowing housing advertisers to target (and avoid) certain groups of users, such as women, disabled veterans, and single mothers. See the following link for a copy of the Complaint:

https://docs.justia.com/cases/federal/district-courts/new-york/nysdce/1:2018cv02689/490833/1

Weighing in on this lawsuit, the U.S. Department of Justice argued that, because of its role in facilitating ad-targeting, Facebook could not rely on the defense presented by the oft-cited Section 230 of the Communications Decency Act (which immunizes platforms from liability for certain third-party speech). Rather, by facilitating ad-targeting, Facebook stepped over the line separating interactive platforms from content creators. The DOJ reportedly expounded that “Facebook goes beyond providing a blank slate for advertisers” and that “Facebook’s ad utilities… invite housing providers to express unlawful demographic and other audience preferences.” See the following link for more on the DOJ’s intervention and argument:

https://politicalmedia.com/content/doj-sides-against-facebook-argues-company-should-face-civil-rights-suit

Political ads may be different because of their First Amendment protection, but targeting lies to certain vulnerable audiences (and hiding those lies from others) seems to cross a line. Facebook (and the entire behavioral advertising industry) uses users’ data to exploit their vulnerabilities and manipulate their behavior. Deceptive political ads may be as old as American elections themselves, but this is new: never before have candidates been able to target deceptive political ads – or our nation’s adversaries been able to target their own propaganda – directly to the eyes of those recipients who are most likely to be influenced by them, based on algorithms that have already analyzed those recipients’ behavior and know their hot buttons better than the recipients may know them themselves. This is brand-new to the American experience, and not readily addressed by our historic remedy of “more and better speech” to wash away the stains of “bad speech.”

This is also dangerous. Targeting deceptive political ads through behavior-informed placement leads not to informed debate, but to fear, then to anger and beyond. Personalizing and targeting those ads so that they are seen by some people, but are invisible to others, leads to even deeper political polarization and a deeper, more depressing sense that we can each choose our own facts as well as our own “news.” Ultimately, it corrodes democracy itself.

Privacy interests are also at stake. Facebook seems to be engaged in practices that are inconsistent with the fair-information principles of obtaining informed consent and allowing users to opt out. In and around Facebook’s algorithms, there is almost no transparency about how personal data is processed – except that Facebook’s own “Data Policy” (aptly not called a “Privacy Policy”), a link to which follows, states that Facebook leverages user information “to help advertisers and other partners measure the effectiveness and distribution of their ads and services, and understand the types of people who use their services and how people interact with their websites, apps, and services”:

https://www.facebook.com/about/privacy

But how does this behavioral advertising work? Why are some people served with political lies while others are not? And is there inherent bias? (We would love nothing more than to opt out of all behavioral advertising, or, even better, to be asked to opt into it, or not, in the first place.)
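To make the questions above concrete, here is a deliberately simplified, entirely hypothetical sketch of how behavioral targeting works in principle. It is not Facebook’s actual system, and every name and number in it is invented for illustration: a scoring function compares each user’s behavior-derived interest profile against an ad’s target profile, and the ad is served only to the highest-scoring users.

```python
# Hypothetical sketch of behavioral ad targeting; not any platform's real system.
# Each user carries an interest profile inferred from past behavior; an ad
# carries a target profile; only the top-scoring users ever see the ad.

from dataclasses import dataclass, field


@dataclass
class User:
    name: str
    # interest -> strength (0.0 to 1.0), inferred from clicks, likes, shares
    profile: dict = field(default_factory=dict)


def score(user: User, ad_profile: dict) -> float:
    """Dot product of the user's interests and the ad's target weights."""
    return sum(user.profile.get(topic, 0.0) * weight
               for topic, weight in ad_profile.items())


def target(users: list, ad_profile: dict, k: int) -> list:
    """Serve the ad to the top-k scorers; everyone else never sees it."""
    ranked = sorted(users, key=lambda u: score(u, ad_profile), reverse=True)
    return [u.name for u in ranked[:k]]


users = [
    User("alice", {"economy": 0.9, "immigration": 0.1}),
    User("bob",   {"economy": 0.2, "immigration": 0.8}),
    User("carol", {"economy": 0.5, "immigration": 0.5}),
]

# A hypothetical political ad aimed at users anxious about immigration:
ad = {"immigration": 1.0}
print(target(users, ad, k=1))  # → ['bob']
```

The point of the sketch is the asymmetry it makes visible: the two users who never see the ad have no opportunity to question or rebut it, which is precisely the transparency problem raised above.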

If Facebook is going to hold itself out as the premier platform for interconnecting the whole world, and at the same time earn billions of dollars “helping advertisers understand the types of people who use their services,” it is incumbent on Facebook to address deceptive political advertising somehow. A good start might look something like the NFHA settlement, the terms of which appear at the following links:

https://nationalfairhousing.org/facebook-settlement/

https://www.ecbalaw.com/wp-content/uploads/2019/03/Ex.-1-Settlement-Agreement-and-General-Release-with-Exhibit-A-dkt-66-2-3-19-19-00368742x9CCC2.pdf

That settlement suggests:

• Facebook could establish a separate advertising portal for political ads on Facebook, Instagram, and Messenger with limited targeting options.

• Facebook could create a page where Facebook users can search for and view all political ads that have been placed by advertisers, regardless of whether users have received the political ads on their News Feeds.

• Facebook could engage academics, researchers, civil society experts, and civil rights/liberties and privacy advocates (including plaintiffs) to study the potential for unintended bias in algorithmic modeling used by social media platforms.

While Facebook (and other platforms) have taken the position that they don’t want to be the arbiters of truth in campaign ads, they are profiting from targeted lies and the exploitation of user data.

That just isn’t right. And it’s not the American way.

Hosch & Morris, PLLC is a Dallas-based boutique law firm dedicated to data protection, privacy, the Internet and technology. Open the Future℠.
