MASTER OF ARTS IN LAW & DIPLOMACY CAPSTONE PROJECT
Digital Platforms, Content Moderation & Free Speech
How To Set A Regulatory Framework for Government, Tech Companies & Civil Society
By Adriana Lamirande
Under Supervision of Dr. Carolyn Gideon
Grant Awarded by Hitachi Center for Technology & International Affairs
Spring 2021 | Submitted April 30
In fulfillment of MALD Capstone requirement
I. RESEARCH QUESTION
II. BACKGROUND
   Social Media: From Public Squares to Dangerous Echo Chambers
   Algorithms as Megaphones
   Looking Forward: The Case for Regulation & Cross-Sectoral Collaboration
III. OVERVIEW OF ANALYTIC FRAMEWORK
IV. EVIDENCE
   Public Interest Framework
   Common Carrier Framework
   Free Market Framework
   International Human Rights Law Framework
V. CONCLUSION/POLICY RECOMMENDATIONS
RESEARCH QUESTION
Which content moderation regulatory approach (international human rights law, public interest, free market, common carrier) best minimizes disinformation and hate speech inciting violence on social media? Which practices by social media companies and civil society, alongside existing legislation, are best suited to guide U.S. policymakers?
BACKGROUND/CONTEXT
To borrow the words of Anne Applebaum and Peter Pomerantsev from Johns Hopkins’ SNF Agora Institute in The Atlantic: “We don’t have an internet based on our democratic values of openness, accountability, and respect for human rights.”1

Social Media: From Public Squares to Dangerous Echo Chambers

Social media platforms have become digital public squares, creating a new arena for users to air opinions, share content they like or find informative (whether true or false), and express their unique worldviews without constraint. In the last few years, a slew of complaints and controversies have emerged regarding Facebook, YouTube and Twitter’s ad hoc content moderation practices, as well as the exploitative nature of their ad-based monetization business model. Their “growth at all costs” ethos is problematic in that it collects vast amounts of private user data to curate personalized news feeds and strengthen highly profitable precision ad targeting – the major caveat being that such a model thrives on content that is controversial, conflict-inducing and extreme in nature.

The notion that “the medium is the message” was pioneered by lauded communications theorist Marshall McLuhan, who argued that the medium through which we choose to communicate holds as much, if not more, value than the message itself. He states: “the personal and social consequences of any medium—that is, of any extension of ourselves—result from the new scale that is introduced into our affairs by each extension of ourselves, or by any new technology. [...] The restructuring of human work and association was shaped by the technique of fragmentation that is the essence of machine technology.”2 In our post-truth era, where platforms have become a stand-in for traditional news media and are increasingly asked to arbitrate speech online, his warning about scale, fragmentation and social consequences feels especially prescient.

Social networks struggle with waves of misinformation and with problematic fact-checking practices and policies that can elevate news of poor quality. A Columbia
1 Applebaum, Anne and Pomerantsev, Peter. “How to Put Out Democracy’s Dumpster Fire.” The Atlantic, March 8, 2021. https://bit.ly/3gQONAW
2 McLuhan, Marshall. Understanding Media: The Extensions of Man. MIT Press, 1964, page 1. https://bit.ly/3aIkeXz
Journalism Review study3 found, for example, that Facebook failed to consistently label content flagged by its own third-party partners: 50% of some 1,100 posts containing debunked falsehoods were not labelled as such. Critics also point out that the fact-checking process is too slow, when information can reach millions in a matter of hours or even minutes.

While digital platforms never set out to undermine or replace journalism, they have for many Americans become a primary source of news, a battleground for flaming partisan debates, and an unruly sphere where information – false or not – is transferred and elevated, with the potential for harmful impact beyond the web. According to a 2019 Pew Research Center report, 55% of U.S. adults now get their news from social media either "often" or "sometimes" – an increase of eight percentage points over the previous year. The report also found that 88% of Americans recognized that social media companies now have at least some control over the mix of news people see each day, and 62% felt this was a problem, saying companies have far too much control over this aspect of their lives.4

In the past, the news and broadcast industries were built on stringent checks and balances by the government, and on a foundation of mostly self-enforced professional integrity standards and editorial guidelines that provided recourse and due process for readers and critics alike. One example we can recall is the Fairness Doctrine, introduced by the Federal Communications Commission in 1949, a policy that required the holders of broadcast licenses both to present controversial issues of public importance and to do so in a manner that was—in the FCC's view—honest, equitable, and balanced. During this period, licensees were obliged not only to cover fairly the views of others, but also to refrain from expressing their own views. The Fairness Doctrine grew out of the belief that the limited number of broadcast frequencies available compelled the government to ensure that broadcasters did not use their stations simply as advocates of a single perspective. Such coverage had to accurately reflect opposing views and afford a reasonable opportunity for discussing contrasting points of view.5 This meant that programs on politics were encouraged to give opposing opinions equal time on the topic under discussion. Additionally, the rule mandated that broadcasters alert anyone subject to a personal attack in their programming and give them a chance to respond, and required broadcasters who endorsed political candidates to invite other candidates to respond.6 Though the Fairness Doctrine had already been substantially eroded, it was officially repealed in 2011 after challenges on First Amendment grounds.7 This is an
3 Bengani, Priyanjana and Karbal, Ian. “Five Days of Facebook Fact-Checking.” Columbia Journalism Review. October 30, 2020. https://bit.ly/2Rd0mYw
4 Grieco, Elizabeth and Shearer, Eliza. “Americans Are Wary of the Role Social Media Sites Play in Delivering the News.” Pew Research Center: Journalism & Media, October 2, 2019. https://pewrsr.ch/2W8n2rx
5 Perry, Audrey. “Fairness Doctrine.” The First Amendment Encyclopedia, May 2017. https://bit.ly/3eLm0ev
6 Matthews, Dylan. “Everything you need to know about the Fairness Doctrine in one post.” Washington Post, August 23, 2011. https://wapo.st/3bMV37v
7 McKenna, Alix. “FCC Repeals the Fairness Doctrine and Other Regulations.” The Regulatory Review. September 26, 2011. https://bit.ly/3sZbNAc
example of one type of mechanism that some suggest could be used to regulate social media content moderation practices today.

Platforms now enjoy the primacy and responsibility of mediating “the truth” once held by traditional news publishers, without the same formalized editorial intervention, at the expense of a filter-bubbled user experience and questionable news quality. Furthermore, the core ad monetization business model is intrinsically linked to the creation of siloed echo chambers, as algorithms elevate and personalize the posts users see based on their on-site activity. Experts assert that this limits people’s exposure to a wider range of ideas and reliable information, and eliminates serendipity altogether.8 By touting the neutrality of their role and policies, digital platforms attempt to escape scrutiny of the algorithmic bias that fuels and is complicit in the broadcasting of extremist views, disinformation, and hate speech inciting violence, enabling such content to spread more quickly and effectively than level-headed reports and stories grounded in fact. One article on Facebook’s refusal to review political content – even if it violates its hate speech guidelines – summarizes the issue as such: “The fact check never gets as many shares as the incendiary claim.”9 It is impossible to determine exactly how these systems might be susceptible to algorithmic bias, since the backend technology operates in a corporate “black box” that prevents experts and lawmakers from investigating how a particular algorithm was designed, what data helped build it, or how it works.10

Algorithms as Megaphones

The internet and its communications networks were once imagined as a space to foster widespread citizen engagement, innovative collaboration, productive debate around political and social issues, and public interest information sharing. Now that they have been weaponized by extremists and conspiracy theorists, companies’ loosely defined rules and disincentive to abandon a toxic business model render their current practices an existential threat to society and democratic process, as hate speech inciting violence manifests in domestic terrorism and disinformation plagues election integrity, among other pillars of political life. As such, despite steps to clarify community guidelines and retool terms of service around defamatory language and false information, many critics deem the after-the-fact PR statements from social media company leadership somewhat disingenuous, lacking a stark assessment of how algorithmic design, financial incentives that reward bad behavior, and negligible moderation remain at work in the absence of a concrete digital rights regime and stringent regulation.
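To make the engagement-driven ranking dynamic described above concrete, the toy sketch below illustrates how a feed that scores posts purely on predicted engagement tends to surface the most incendiary material. It is an illustrative sketch only: the posts, signals and weights are hypothetical assumptions, not any platform’s actual algorithm or data.

```python
# Illustrative toy example: rank a feed purely by predicted engagement.
# All posts, signals and weights are hypothetical; no real platform code is implied.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_clicks: float    # model's estimated click probability
    predicted_shares: float    # estimated share probability
    predicted_comments: float  # estimated comment probability
    factually_reliable: bool   # known to fact-checkers, but invisible to the ranker

def engagement_score(p: Post) -> float:
    # Weights are arbitrary for illustration; note that accuracy is not an input.
    return 1.0 * p.predicted_clicks + 3.0 * p.predicted_shares + 2.0 * p.predicted_comments

feed = [
    Post("Measured policy explainer", 0.10, 0.02, 0.03, factually_reliable=True),
    Post("Outrage-bait conspiracy claim", 0.35, 0.20, 0.25, factually_reliable=False),
    Post("Local community notice", 0.08, 0.01, 0.02, factually_reliable=True),
]

# The incendiary post rises to the top: nothing in the objective rewards
# (or even observes) factual reliability, only the attention a post attracts.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):.2f}  reliable={post.factually_reliable}  {post.title}")
```

The point is structural rather than empirical: when the optimization target is engagement alone, the observation that “the fact check never gets as many shares as the incendiary claim” falls out of the objective function itself.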
8 Anderson, Janna and Rainie, Lee. “Theme 5: Algorithmic categorizations deepen divides.” Pew Research Center. February 8, 2017. https://pewrsr.ch/32YArX2
9 Constine, Josh. “Facebook promises not to stop politicians’ lies & hate.” TechCrunch, September 24, 2019. https://tcrn.ch/2xhih6J
10 Heilweil, Rebecca. “Why algorithms can be racist and sexist.” Recode. February 18, 2020. https://bit.ly/3eGKcPe
One example came about in the midst of Facebook standing up its own Oversight Board, when a group of its most vocal critics formed the “Real” Oversight Board. Its intention was to analyze and critique Facebook’s content moderation decisions, policies and other platform issues in the run-up to the presidential election and beyond. The expert body’s rationale is summed up in a quote from one member: “This is a real-time response from an authoritative group of experts to counter the spin Facebook is putting out.”11

An April 2021 Buzzfeed investigation surfaced an internal report which found that Facebook failed to take appropriate action against the Stop the Steal movement ahead of the January 6 Capitol Hill riot, after which the company repeated the refrain that it will “do better next time.”12 Harvard Shorenstein Center Research Director Joan Donovan said the report’s revelations and misleading public comments expose the true nature of the company and its products, stating that “it shows that they know the risks, and they know the harm that can be caused and they are not willing to do anything significant to stop it from happening again.”13 Speaking to the real-life harms of organizing activity and capabilities on the platform, she says: “There is something about the way Facebook organizes groups that leads to massive public events. And when they’re organized on the basis of misinformation, hate, incitement, and harassment, we get very violent outcomes.”14 This is not the first high-profile instance where the platform failed to act and later issued a report doubling down on its commitment to address problematic content and reassess its approach to enforcing its policies. It echoes previous episodes, like the 2016 election disinformation postmortem and a 2018 human rights report concluding that the company had failed to keep Facebook from being leveraged to foment division and incite offline violence that helped fuel the Myanmar genocide.

It’s not just Facebook. Digital scholar Zeynep Tufekci tracked the way YouTube’s recommendation algorithm serves as an engine of radicalization. She noticed that videos of Trump rallies led to videos of alt-right content, and that Hillary Clinton speeches eventually served up leftist conspiracies. As she widened her analysis, she found it wasn’t just politics. “Videos about vegetarianism led to videos about veganism. Videos about jogging led to videos about running ultramarathons. It seems as if you are never ‘hard core’ enough for YouTube’s recommendation algorithm. It promotes, recommends and disseminates videos in a manner that appears to constantly up the stakes.”15

11 Solon, Olivia. “While Facebook works to create an oversight board, industry experts formed their own.” NBC News, September 25, 2020. https://nbcnews.to/3uaTx8k
12 Lytvynenko, Jane; Mac, Ryan and Silverman, Craig. “Facebook Knows It Was Used To Help Incite The Capitol Insurrection.” Buzzfeed, April 22, 2021. https://bit.ly/2R5PnzX
13 Lytvynenko, Jane; Mac, Ryan and Silverman, Craig. “Facebook Knows It Was Used To Help Incite The Capitol Insurrection.” Buzzfeed, April 22, 2021. https://bit.ly/2R5PnzX
14 ibid
15 Klein, Ezra. Why We’re Polarized. Avid Reader Press / Simon & Schuster, January 28, 2020, page 156.

Looking Forward: The Case for Regulation & Cross-Sectoral Collaboration
The societal implications of social media as they concern free speech are clear but sensitive, as the First Amendment’s cultural and legislative power hovers heavily over all considerations. Another important roadblock is that the technologies underpinning curation and moderation remain poorly understood, given that they operate as black boxes: they can be observed in terms of inputs and outputs, but their internal workings stay hidden. Harvard Business Review contributors Theo Lau and Uday Akkaraju succinctly summarize this conundrum: “When we type a query into a search engine, the results are determined and ranked based on what is deemed to be “useful” and “relevant.” What if they decide whose voice is prioritized? What if, instead of a public square where free speech flourishes, the internet becomes a guarded space where only a select group of individuals get heard — and our society in turn gets shaped by those voices?”16

The public and the government are aware that data collection helps algorithms determine what will capture the most eyeballs in today’s “attention economy” – keeping users scrolling, clicking and sharing. But neither has a clear view of how those algorithms are trained to flatten identity and opinion into manageable labels, thus reinforcing biases, segregating individuals into self-perpetuating echo chambers, and shaping public opinion with serious consequences. A 2020 Gallup/Knight survey17 indicates that while users believe online platforms are important places of open expression, they have grown warier about the ways companies distribute misleading public health information, election disinformation, bigoted trolling and other harmful content.

As Sunstein suggests in “#republic: Divided Democracy in the Age of Social Media,” the ease with which users with fringe ideals spanning racism, sexism and homophobia can find their niche is cause for concern. Facilitated by platform architecture to grow a captive audience and gain the opportunity to go “viral,” an extreme groupthink mindset forms. As reports of mass shooters across the U.S. have shown, posting manifestos on social media and garnering support from fellow incels emboldened them to carry out destructive acts in real life. Anonymity only adds fuel to the fire that networked technologies present – evaporating societal barriers to discriminatory threats, hateful or defamatory language, and false information around issues of public interest.18

Media scholar Jonathan Albright coined the term “Untrue-Tube” in reference to YouTube’s primacy in the disinformation space. Albright notes that the video service’s recommendation system – deemed the best in the world – allows content creators to monetize harmful material while benefiting from the boost that comes with the system’s high visibility potential. While he cautions against censorship,
16 Lau, Theodora and Akkaraju, Uday. “When Algorithms Decide Whose Voices Will Be Heard.” Harvard Business Review, Nov. 12, 2019. https://bit.ly/2xhU5B3
17 “The future of tech policy: American views.” Knight Foundation, June 16, 2020. https://kng.ht/3gNe3rF
18 Sunstein, Cass R. #republic: Divided Democracy in the Age of Social Media. Princeton University Press, 2018, page 185.
he agrees that policies must be put in place to include optional filters and increase the number of human moderators scrutinizing potentially dangerous videos and imagery.19

Social media companies have recognized their role in providing platforms for speech. In a 2018 hearing before the Senate Select Committee on Intelligence, Twitter CEO Jack Dorsey repeatedly referred to Twitter as a “digital public square,” emphasizing the importance of “free and open exchange” on the platform.20 Alongside legislation, civil society groups are encouraging new initiatives and standard-setting, and private companies have made some strides to stand up new oversight bodies and moderation features internally in the face of proliferating disinformation and hate speech inciting violence online. Such efforts are based on a growing awareness that this paradigm shift in information sharing urgently requires some regulatory oversight. Researcher and Founding Director of Ranking Digital Rights (RDR) Rebecca MacKinnon asserts that “a clear and consistent policy environment that supports civil rights objectives and is compatible with human rights standards is essential to ensure that the digital public sphere evolves in a way that genuinely protects free speech and advances social justice.”21

Companies have long toed the line between rejecting and inviting regulation. For years, Facebook lobbied governments against imposing tough rules, warning that they would harm its business model. Recently, we have seen some reversal of this position, with Big Tech increasingly pleading for new rules for the good of its business – and to regain user trust. In March 2019, Zuckerberg penned an op-ed in the Washington Post calling for government intervention to delineate a standardized approach for content review systems at scale, and to set baselines against which companies can measure the efficacy and consistency of their practices.22 In a white paper published in February 2020, he and his team detailed a push for internet regulation, specifically calling on lawmakers to devise rules around harmful content, a different model for platforms’ legal liability and a “new type of regulator” to oversee enforcement in the area of harmful content, among others. In addition, the company would consider unlocking content moderation systems for external audit to help governments better design regulation in areas like hate speech.23
19 Albright, Jonathan. “Untrue-Tube: Monetizing Misery and Disinformation.” Feb. 25, 2018. http://bit.ly/31Nmytg
20 Brannon, Valerie. “Free Speech and the Regulation of Social Media Content.” Congressional Research Service, March 27, 2019, page 5. https://bit.ly/334dDVX
21 MacKinnon, Rebecca. “Reclaiming Free Speech for Democracy and Human Rights in a Digitally Networked World.” University of California National Center for Free Speech and Civic Engagement. 2019-2020. https://bit.ly/2SdkBpj
22 Zuckerberg, Mark. “Mark Zuckerberg: The Internet needs new rules. Let’s start in these four areas.” Washington Post, March 30, 2019. https://wapo.st/2PwFic1
23 Drozdiak, Natalia. “Facebook Needs Regulation to Win User Trust, Zuckerberg Says.” Bloomberg, February 17, 2020. https://bloom.bg/2VLVv0j
The creation of a Facebook Oversight Board demonstrates Zuckerberg’s willingness to grapple with these difficult issues, but he has come up against claims of partiality for giving the company too much power in nominating a majority of the “external” body’s board members.24 To this point, pessimism about his and other leaders’ appeals to the government for increased rules and standards is valid, as internet platforms have failed to rise to such occasions before. A New America Open Technology Institute assessment against the Santa Clara Principles on Transparency and Accountability Around Online Content Moderation25 found that although Facebook, YouTube and Twitter have demonstrated progress in implementing the recommendations related to “notice” and “appeals,” they have reneged on their commitment to disclose the numbers of posts removed and accounts permanently or temporarily suspended due to content guideline violations.26
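To make concrete what “notice,” “appeals” and numbers-based disclosure might involve in practice, the sketch below models a single moderation action and the aggregate counts the Santa Clara Principles call on platforms to publish. It is a hypothetical illustration: the field names and sample data are assumptions of this write-up, not the Principles’ official schema or any platform’s reporting format.

```python
# Hypothetical sketch of per-decision "notice" plus aggregate disclosure.
# Field names and sample data are illustrative assumptions only.
from collections import Counter
from dataclasses import dataclass

@dataclass
class ModerationAction:
    content_id: str
    policy_violated: str      # e.g. "hate speech", "election disinformation"
    action: str               # "post removed", "account suspended", ...
    user_notified: bool       # "notice": was the user told which rule was broken?
    appeal_available: bool    # "appeals": can the user contest the decision?

actions = [
    ModerationAction("p-1001", "hate speech", "post removed", True, True),
    ModerationAction("p-1002", "election disinformation", "post removed", True, True),
    ModerationAction("u-2001", "repeated violations", "account suspended", False, True),
]

# Aggregate counts by action and policy: the kind of figures the Principles
# ask platforms to disclose regularly.
report = Counter((a.action, a.policy_violated) for a in actions)
for (action, policy), count in sorted(report.items()):
    print(f"{action:20s} {policy:25s} {count}")
```

Even this minimal structure makes the gap visible: publishing the aggregate counts requires only that such records be kept and tallied, which is precisely the commitment the assessment found the major platforms had not honored.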
As things stand, private internet companies remain enigmatic self-regulators relying on ad hoc “platform law,” where consistency, accountability and remedy are non-existent. As such, our best recourse for reining in lawless abuses of free speech and restoring healthy dialogue in internet forums is federal regulation. Policies to combat the hate speech and disinformation chipping away at democratic deliberation online should push for transparency around moderation and curation practices, accountability for mistakes made along with commitments to amend internal decision-making accordingly, and sanctions on private entities unwilling to answer to authorities for editorial misdeeds and abuses such as free speech violations.
This paper will outline which framework – public interest, common carrier, free market, international human rights law – is best suited to help minimize disinformation and hate speech inciting violence. This analysis will necessarily require weighing which one best protects the fundamental right of free speech while at the same time reducing harmful content. Based on evidence gathered, I will map suggestions to begin building an effective and actionable regulatory framework for internet governance – grounded in best practices for private companies, civil society groups and U.S. lawmakers, weaving in existing legislation, and proposing new protocols to shape a stronger and more effective digital social contract for all.
ANALYTIC FRAMEWORK OVERVIEW
In this section I provide an overview of the analytic framework designed to structure my thinking, which I am using as a model to guide and facilitate my understanding of the evidence for each possible answer. This paper will consider the following four possible answers to my research question.
24 Ingram, Matthew. “Facebook lays out the rules for its new Supreme Court for content.” Columbia Journalism Review, January 30, 2020. https://bit.ly/3aQipYl
25 Santa Clara Principles on Transparency and Accountability Around Online Content Moderation landing page. Accessed April 29, 2021. https://santaclaraprinciples.org/
26 Singh, Spandana. “Assessing YouTube, Facebook and Twitter’s Content Takedown Policies.” New America Open Technology Institute, May 7, 2019. https://bit.ly/2KNBbVH
First is the public interest framework, which advocates for guiding the use and regulation of scarce resources for the public good, in order to prevent those in a dominant position – traditionally broadcast licensees, and today social media platforms acting as publishers – from exploiting that position to disseminate information for profit in the “marketplace of ideas.” The Fairness Doctrine was one mechanism facilitating this approach, as it required broadcasters to offer equal time for, and balanced perspectives on, important civic issues. Another is Section 230 of the Communications Decency Act of 1996, which is today under consideration for amendment to change how its statute applies to social media companies.

Second is the common carrier framework, which advocates for regulating social media companies like common carriers and public utilities. Notably, this would subject them to non-discrimination clauses and the principle of net neutrality, wherein they are prohibited from speeding up, slowing down, or blocking any content, applications or websites customers want to use.

Third is the free market framework, which is the dominant approach today, wherein digital platforms self-regulate both in the absence of government intrusion and in an effort to avoid it.27 Mechanisms here include internally set Terms of Service and Community Guidelines, and “third party” bodies like Facebook’s Oversight Board and Twitter’s Birdwatch acting as arbiters for tough takedown cases.

Fourth is the international human rights law framework, which suggests applying the lens of globally ratified human rights norms and values, namely the right to free speech, to private companies’ content moderation policies. Central to this approach is its ability to grapple with issues that touch free speech and public debate, its inherent balancing of free speech with other fundamental rights, and the myriad documents drafted to help guide businesses in how best to uphold human rights.

While the public interest framework has seen civil society groups create valuable indices to measure companies’ policies and reporting practices (or lack thereof), these bodies lack a crucial incentivizing mechanism to ensure companies abide by their recommendations, and are unable to hold them accountable in any official or financial capacity. On the policy side, arguments against applying the Fairness Doctrine and debates around amending Section 230 remain ongoing, but some of the myriad proposals to update the statute offer promising provisions to bring it into the social media age. Taking all of this into consideration, the civil society experts and scholars analyzing and advocating for policies that could help limit hate speech and disinformation online remain the backbone of the content moderation regulation conversation: they have put forth reports exposing company misbehavior and increasingly have the ear of U.S. policymakers across the aisle. Their role should be to act as educators and guides to U.S. policymakers arguably still lacking broad technical knowledge, and to liaise
27Lotz, Amanda. “Profit, not free speech, governs media companies’ decisions on controversy.” The Conversation. August 10, 2018. https://bit.ly/3u9EOuc
with private companies to encourage the reconsideration and modification of internal policies that remain opaque and overly permissive.

The common carrier framework grapples with the benefits and pitfalls of classifying social media as a common carrier or public utility, and is rife with disagreement amongst scholars. Some academics are proponents of this approach, while others argue that a framework traditionally applied to broadcast (where the broadcast spectrum is limited) is not appropriate for social media – which fosters a wide array of information sources, is free for all sorts of users to access, and is not restricted by a similar scarcity predicament.

Free market solutions and self-regulatory measures put forth by companies in the absence of formal regulation – what is here classified as the free market framework – continue to fall short. Some critics view internal updates to tackle problematic posts, and the punting of the most egregious cases to third-party bodies, as tactics to further shed responsibility and avoid a reckoning with what most experts see as the major element being glossed over: the financial incentives behind the core business model. That said, some free market solutions should not be total throwaways. Facebook’s tapping of subject matter experts for its Oversight Board has broadened the public’s view into the types of cases the platform puts under review, with some net positive and actionable next steps. Its rulings can certainly supplement growing calls for more accountability and transparency measures around content moderation decision-making. Twitter’s Birdwatch tool, though still in its early rollout, is a valuable experiment in testing whether crowdsourced moderation could work as a model to be deployed platform-wide. Ultimately, the more tools attempting to solve this tricky problem, the better.

The international human rights framework applies broadly to states that have ratified the relevant treaties, so nailing down how those norms should be interpreted and applied specifically by global private companies lacking subject matter expertise seems an arduous, and ultimately pointless, task. But the Guiding Principles on Business and Human Rights28 propose meaningful steps companies can adopt to improve their human rights record, so this document should continue to be referenced in these discussions.

In short, there is no single right or perfect framework for probing this complex, ever-changing challenge. I will suggest that a combination of the public interest and free market frameworks is best suited to envision effective regulation for social media content moderation. I will venture to say that U.S. policymakers – seeking guidance from digital rights experts, grappling with Section 230 amendment, and recognizing the merits of platforms’ internal strides to tackle their difficult role – likely feel the same.

In order to evaluate the attributes that make up each framework, I will analyze how each stacks up against three key dimensions. One dimension is its efficacy in removing harmful content,
28Guiding Principles on Business and Human Rights, UN Human Rights Office of the High Commissioner. https://bit.ly/2yR2kog
understood as falling into the categories of disinformation and hate speech inciting violence. The second is how well it protects free speech. The third is its implementability and enforceability. To expand upon these considerations, I will begin by setting out the landscape of existing approaches. From there, I will break down what each framework does to piece together the complex puzzle of regulating or self-regulating social media content moderation, touching upon what is working and what is not, and outline how all these mechanisms interact. Finally, I will leverage the above analysis to inform a concluding argument specifying policy recommendations directed at both U.S. lawmakers and social media companies.
ANALYSIS OF FRAMEWORKS
PUBLIC INTEREST FRAMEWORK
The public interest framework is primarily concerned with protecting societal values that are at risk of being lost if we rely solely on the free market approach. In the context of information and communications technologies specifically, it is concerned with the dissemination of accurate information, with how information flows can have an impact beyond the digital sphere, and with the consequences of relying on automation to moderate content in our complex and ever-evolving “informationscapes.” Related recommendations and pushes for legislation have centered on developing safeguards to address what is seen as a moderation crisis undermining values like free speech and democratic processes at a time of heightened political polarization – a crisis that may lead to a new era of public oversight of private companies according to a public interest standard.29

To this end, proponents of this approach argue that companies’ market dominance has led to excessive influence over the political and public sphere, with poor outcomes for users. U.S. lawmakers and civil society groups have urged examination of platforms’ core business model and of the inner workings of content moderation, called on companies to produce transparency reports that include details about content blocking and removal, and asked for access to internal data so researchers can study how algorithmic design may be driving substandard outcomes in minimizing disinformation and hate speech inciting violence – content that sometimes translates into negative consequences and violent events in real life. There is wide consensus amongst civil liberties and digital rights groups that platforms like Facebook appeal to free speech principles only when they are economically advantageous,30 and that platforms rely on techno-solutionism and internal standard-setting that net suboptimal results and are deemed overdue but inadequate. Color of Change’s vice president Arisha Hatch said in a statement: “This is progress, but Twitter demonstrated a consequential lack of urgency in implementing the updated policy before the most fraught election cycle in modern history, despite repeated warnings by civil rights advocates and human rights organizations.”31

So, what public interest mechanisms have been proposed by the U.S. government and civil society groups that we might apply to digital platforms to fill the vacuum left by the free market policies in place today? Which can best address public interest goals and limit cesspools of hate and disinformation online?

Section 230 of the Communications Decency Act says that an “interactive computer service” can’t be treated as the publisher or speaker of third-party content.32 The Electronic Frontier Foundation calls it
29 Matzko, Paul and Samples, John. “Social Media Regulation in the Public Interest: Some Lessons from History.” Knight First Amendment Institute at Columbia University. May 2020. https://bit.ly/3xCsoNA
30 Solon, Olivia. “‘Facebook doesn't care': Activists say accounts removed despite Zuckerberg's free-speech stance.” NBC News. June 15, 2020. https://nbcnews.to/3vq2MS8
31 Klar, Rebecca. “Twitter, Facebook to update hate speech moderation.” The Hill. December 30, 2020. https://bit.ly/3xATapK
32 Newton, Casey. “Everything You Need to Know About Section 230.” The Verge. May 28, 2020. https://bit.ly/344JxTh
“the most important law protecting internet speech.”33 Because it was created before the advent of social media, critics fear it protects companies while enabling real harm to their users. EFF describes Section 230’s purpose as helping prevent overcensorship by protecting online intermediaries that host or republish speech against laws that might otherwise be used to hold them legally responsible for what their users and other third parties say and do. Without Section 230, rather than face potential liability for their users’ actions, most intermediaries would likely not host any user content at all, or would protect themselves by actively censoring what we say, see, and do online.34

There have been many congressional proposals to amend Section 230 or repeal it entirely.35 Bipartisan at its origin, the law has been singled out and scrutinized across the aisle. Senator Ted Cruz describes it as “a subsidy, a perk” for Big Tech, and Speaker Nancy Pelosi calls it a “gift” to tech companies “that could be removed.”36 More broadly, Democrats assert it allows tech companies to get away with not moderating content enough, while Republicans proclaim it enables them to moderate too much.37 Because a flurry of legislative reforms have been put forth, Future Tense, the Tech, Law, & Security Program at the Washington College of Law at American University, and the Center on Science & Technology Policy at Duke University partnered on a project to track all of them starting in 2020.

The bipartisan Platform Accountability and Consumer Transparency (PACT) Act,38 introduced by U.S. Senators Schatz and Thune in June 2020, is one proposal to update Section 230. Though contentious for its thorny treatment of court orders around illegal content, it puts forth worthwhile requirements for transparency, accountability, and user protections. These include an easy-to-understand disclosure of moderation guidelines – guidelines that today remain opaque, a roadblock discussed during the 2019 sessions on platform transparency at the Transatlantic Working Group on Content Moderation and Free Expression.39 Additionally, platforms would have to explain their reasoning behind content removal decisions, and explain clearly how a removed post violated terms of use. Lastly, the act would create a system for users to appeal or file complaints around content takedowns. Digital rights organization Access Now has called it the most reasonable proposal put forth thus far, acknowledging that while it is not a complete or perfect solution, a few of its clauses offer a good start.40 Daphne Keller, the Director of the Program on Platform Regulation at Stanford’s Cyber Policy Center, deems it an “intellectually serious effort to grapple with the operational challenges of
33 Kelley, Jason. “Section 230 is Good, Actually.” EFF. December 3, 2020. https://bit.ly/3ggZsT7
34 “Section 230 of the Communications Decency Act.” EFF. Accessed on April 29, 2021. https://www.eff.org/issues/cda230
35 Jeevanjee, Kiran et al. “All the Ways Congress Wants to Change Section 230.” Slate. March 23, 2021. https://bit.ly/3gMi2VD
36 Wakabayashi, Daisuke. “Legal Shield for Social Media Is Targeted by Lawmakers.” New York Times. October 28, 2020. https://nyti.ms/39LFWNx
37 Laslo, Matt. “The Fight Over Section 230—and the Internet as We Know It.” WIRED. August 13, 2019. https://bit.ly/36NhysO
38 Sen. Schatz, Brian. S.4066 - PACT Act. Congress.gov. June 24, 2020. https://bit.ly/3u6a7pU
39 MacCarthy, Mark. “How online platform transparency can improve content moderation and algorithmic performance.” Brookings. February 17, 2021. https://brook.gs/3aRGmkX
40 “Unpacking the PACT Act.” Access Now. September 21, 2020. https://bit.ly/2SetYVU
content moderation at the enormous scale of the internet. [...] We should welcome PACT as a vehicle for serious, rational debate on these difficult issues.” However, she is still grappling with some of its provisions and logistics, namely what the First Amendment ramifications of its FTC consumer protection model for Terms of Service-based content moderation would be.41 In tackling hate speech online, this is especially pertinent considering that such speech is not technically illegal under the First Amendment, barring narrow exceptions42 such as threats of illegal conduct or incitement intended to and likely to produce imminent illegal conduct (i.e. incitement to imminent lawless action).43

Scholars Danielle Citron and Benjamin Wittes have offered what they present as a broader though balanced fix, wherein platforms would enjoy Section 230 immunity from liability if they can show that their response to unlawful uses of their services is reasonable. Their revision to the statute is pasted below for reference:44

No provider or user of an interactive computer service that takes reasonable steps to prevent or address unlawful uses of its services shall be treated as the publisher or speaker of any information provided by another information content provider in any action arising out of the publication of content provided by that information content provider.

What constitutes a “reasonable standard of care” would take into account that social networks with millions of posts a day cannot realistically respond to all complaints of abuse within a short time span. But this clause could help push platforms to deploy technologies that detect content previously deemed unlawful or in violation of their Terms of Service.

The FCC’s Fairness Doctrine45 is another mechanism previously applied to U.S. broadcasters, which required them to present a balanced range of perspectives on issues of public interest. Former President Trump’s 2020 Executive Order on Preventing Online Censorship46 called on the Department of Justice to “assess whether any online platforms are problematic vehicles for government speech due to viewpoint discrimination,” suggesting that private social media companies should be compelled to serve as viewpoint-neutral vehicles for the dissemination of “government speech.” Around this time, Senator Hawley introduced S.1914, a bill that would have
41 Keller, Daphne. “CDA 230 Reform Grows Up: The PACT Act Has Problems, But It’s Talking About The Right Things.” Stanford Law School Center for Internet & Society. July 16, 2020. https://stanford.io/3sZj03e
42 “Which Types of Speech Are Not Protected by the First Amendment?” Freedom Forum Institute. Accessed April 29, 2021. https://bit.ly/3aPKMZL
43 Staff. “Factbox: When can free speech be restricted in the United States?” Reuters. August 14, 2017. https://reut.rs/330FIgH
44 Citron, Danielle Keats and Wittes, Benjamin. “The Problem Isn't Just Backpage: Revising Section 230 Immunity.” Georgetown Law Technology Review 453, U of Maryland Legal Studies Research Paper. July 23, 2018. Available at SSRN: https://ssrn.com/abstract=3218521
45 Ruane, Kathleen. “Fairness Doctrine: History and Constitutional Issues.” Congressional Research Service. July 13, 2011. https://bit.ly/2SflDkN
46 Executive Order 13925, “Preventing Online Censorship.” May 28, 2020. https://bit.ly/3xuvrHP
amended Section 230 so that “big tech companies would have to prove to the FTC by clear and convincing evidence that their algorithms and content-removal practices are politically neutral.” There is nothing in Section 230 that requires social platforms hosting third-party and user-generated content to be viewpoint neutral. Brookings Nonresident Senior Fellow in Governance Studies for the Center for Technology Innovation John Villasenor47 argues that today’s internet ecosystem enables access to a wide and diverse range of information sources and viewpoints (in contrast to the limited broadcast spectrum within which traditional broadcast media operated). Furthermore, because platforms are private companies – not state actors bound by the First Amendment, but speakers protected by it – requiring them to be “politically neutral” would itself raise constitutional problems, as platforms are free to welcome, and to make preferential decisions about, a diverse range of perspectives spanning political ideologies.

Additionally, DC think tank New America’s Ranking Digital Rights project compiles an annual, comprehensive Corporate Accountability Index to evaluate and rank the world’s most powerful digital platforms and telecommunications companies on their disclosed policies and practices affecting users’ digital rights, like freedom of expression and privacy.48 The hope is that this could be a primary vehicle to leverage its breadth of public interest research, evaluate how transparent tech companies are about their policies and practices in comparison with their peers, establish a baseline against which to measure their commitment to digital rights, and push companies to improve how they uphold such obligations.

Finally, some scholars, including Ethan Zuckerman, have proposed mapping a public service-minded digital media alternative to the Facebooks and Twitters of the world.49 But this could take years, and there is no guarantee that alternative networks would be able to pierce through the crowded media environment, or that users would make the switch, considering Facebook currently has around 2.74 billion active users, YouTube around 2.29 billion, and Twitter around 350 million globally. In Social Media and the Public Interest: Media Regulation in the Disinformation Age, Duke Public Policy Professor Philip M. Napoli argues that a social media–driven news ecosystem represents a case of market failure in what he calls the algorithmic marketplace of ideas.50 To respond, he believes we need to rethink fundamental elements of media governance based on a revitalized concept of the public interest. Some of the bipartisan proposals put forth to amend Section
47 Villasenor, John. “Why creating an internet ‘fairness doctrine’ would backfire.” Brookings. June 24, 2020. https://brook.gs/3vn8JiW
48 2020 Ranking Digital Rights Corporate Accountability Index landing page. Accessed April 29, 2021. https://bit.ly/3aU6TOz
49 Zuckerman, Ethan. “The Case for Digital Public Infrastructure.” Knight First Amendment Institute at Columbia University. January 17, 2020. https://bit.ly/3vvAZQq
50 Napoli, Philip M. Social Media and the Public Interest. New York: Columbia University Press, 2019. https://doi.org/10.7312/napo18454
230 to address how it applies to social media companies, as well as scholars’ reimagining of the digital public square through the benefits and building blocks of alternative social networks, demonstrate we are well on our way.

Based on the discussion above, it is reasonable to conclude that amending Section 230 – incorporating certain tenets of the PACT Act as well as Citron and Wittes’ proposal that platforms be required to demonstrate a “reasonable standard of care” – is the best avenue forward to minimize harmful content and preserve free speech under the public interest framework. While implementation remains up in the air, we know Congressional Democrats have begun discussions with the White House on ways to crack down on Big Tech, including the best ways to hold social media companies accountable for the spread of disinformation, hate speech and information-sharing that led to events like the Capitol riot. During his candidacy, President Biden called for revoking Section 230 altogether, but much of the legislation on the table is concerned with amending rather than repealing the statute.51 On the enforcement front, there is still uncertainty in discussions around the FCC’s and FTC’s authority to interpret and enforce Section 230 provisions.52 Some suggest it may be best to leave oversight of digital platforms and related issues to a new, more specialized digital regulatory agency.53
COMMON CARRIER FRAMEWORK

A common carrier is a company that transports goods or provides services, such as carrying communications, and is responsible for those goods or services during transport. In the U.S., and for the purposes of exploring this research question, the term can refer to telecommunications service providers and public utilities, whose business is affected with a public interest.54 The term “telecommunications” means the transmission, between or among points specified by the user, of information of the user’s choosing, without change in the form or content of the information as sent and received.55 The FCC has classified internet service providers (ISPs), like Comcast, as common carriers for the purpose of enforcing net neutrality. Net neutrality is the basic principle that prohibits internet service providers like AT&T, Comcast and Verizon from speeding up, slowing down or blocking any content, applications or websites users want to use.56
51 Bose, Nandita and Renshaw, Jarrett. “Exclusive: Big Tech's Democratic critics discuss ways to strike back with White House.” Reuters. February 17, 2021. https://reut.rs/3eKsutZ
52 Brannon, Valerie et al. “UPDATE: Section 230 and the Executive Order on Preventing Online Censorship.” Congressional Research Service Legal Sidebar. October 16, 2020. https://bit.ly/3nNfKal
53 Kimmelman, Gene. “Key Elements and Functions of a New Digital Regulatory Agency.” Public Knowledge. February 13, 2020. https://bit.ly/3aSS4fh
54 Telecommunications common carrier definition. Law Insider. https://bit.ly/335Ael4
55 “Basic Service / Telecommunications Service.” Cybertelecom Federal Internet Law & Policy, An Educational Project. Accessed April 29, 2021. https://bit.ly/3nyQxQu
56 “The internet without Net Neutrality isn’t really the internet.” Free Press. Accessed April 29, 2021. https://bit.ly/32YNmbl
A key tenet of common carriage is that the carrier must provide non-discriminatory service. This means that service cannot be denied for any legal content or purpose, and while there can exist different tiers and accompanying pricing rates, service at each tier must be provided to those who pay for it. Non-discrimination regulations essentially prohibit common carriers from making individualized or case-by-case decisions with respect to the terms upon which they provide their services.57

In looking at what regulatory frameworks would best be applied to digital platforms like Facebook, Twitter and YouTube, classifying them as common carriers or public utilities in order to regulate them accordingly has been floated by some, though most experts caution against its applicability in the digital platform context. While some services are required to be common carriers – like telephone and text messaging – the obligations that come with being treated like one are significant. As such, it is necessary to ask whether common carrier regulation would be beneficial as applied to social media platforms, or whether other frameworks are better suited to helping minimize hate speech and disinformation online.

An important legal requirement for a common carrier is that it cannot discriminate against a customer or refuse service unless there is some compelling reason. This could translate into a requirement that networks not demonstrate “bias” against certain viewpoints. In practice, it means that all legal content must be treated in a non-discriminatory manner, and all users who are engaging with or generating content must be treated the same.

In April 2021, Supreme Court Justice Clarence Thomas put forth an opinion supporting the common carrier approach for regulating social media content, attached to the Court’s decision to dismiss a lawsuit against former President Trump over his blocking of some Twitter followers. He cited the Turner Broadcasting case,58 which required cable operators to carry broadcast signals and which he argued might also apply to digital platforms. In short, he offered a response to the First Amendment challenge to the common carrier framework: social media platforms would not be treated as speakers, but neither would they have the right to decide what is said on their sites. Rather, they would be “reconceptualized as neutral, passive conveyors of the speech of others.”

Mark MacCarthy, a Nonresident Senior Fellow in Governance Studies for the Center for Technology Innovation at Brookings, outlines the response of experts and stakeholders on the left and the right to Justice Thomas’ opinion. Conservatives concerned with social media censorship applauded it.59 Some scholars on the left also endorse the idea, with law professors Genevieve Lakier and Nelson Tebbe
57 See 47 U.S.C. § 201; see also Report to Congress, FCC, CC Docket No. 96-45, FCC 98-37 (Apr. 10, 1998), at 8, 37-41, available at http://transition.fcc.gov/Bureaus/Common_Carrier/Reports/fcc98067.pdf
58 “Turner Broadcasting System, Inc. v. FCC, 512 U.S. 622 (1994).” Justia US Supreme Court. Accessed April 29, 2021. https://bit.ly/3t5NL6G
59 MacCarthy, Mark. “Justice Thomas sends a message on social media regulation.” Brookings. April 9, 2021. https://brook.gs/3u9FfEX
arguing users have a constitutional right to carriage on social media, needed to counteract “the threats to freedom of speech that result from private control of the mass public sphere.”60 But other experts do not necessarily consider a common carrier framework the path to follow. In a response to Lakier and Tebbe, First Amendment scholar Robert Post notes that treating social platforms as common carriers would mean they would be “compelled to broadcast intolerable and oppressive forms of speech,” and that such a move would invalidate even the minimal content moderation practices that exist today, exacerbating the problems of harmful but legal communication, like disinformation and hate speech, that we grapple with in the digital public sphere.61 Following the same line of thought, Public Knowledge Legal Director John Bergmayer – who specializes in telecommunications, media and internet issues – does not think “must carry” requirements are necessary for social networks.62 In response to those on both sides of the aisle who think platforms should default to leaving technically legal content up and should give leaders like former President Trump a platform for public interest reasons (access to his thoughts on policy, etc.), Bergmayer argues that the law should not require platforms to carry all user-generated content indifferently, and cautions against unmoderated speech platforms focused solely on removing illegal content.

One mechanism underpinning arguments for imposing a common carrier framework is natural monopoly, wherein a dominant company makes it very difficult for competitors to enter a marketplace. Bergmayer asserts that this dynamic does not apply to social media networks, which are offered to end users for free and whose underlying information and communication technologies can be repurposed and replicated into alternative social networks. While major platforms control access to their own services, they are not the sole providers of communication and content generation online. Unlike the smaller number of ISPs, users can go elsewhere to seek such services out as needed. Additionally, even if an existing social media platform denies a competitor use of its “facility,” competitors can relatively easily duplicate such platforms; the accompanying challenge centers more on building a comparable user base than on building the “physical” digital infrastructure of a social network. Parler stepping in to fill the vacuum for users and accounts removed from Twitter and Facebook is one example of this process in action, as it saw downloads surge after the Big Tech players restricted groups and posts peddling false election claims and banned Trump.63

60 Lakier, Genevieve and Tebbe, Nelson. “After the Great Deplatforming: Reconsidering the Shape of the First Amendment.” Law and Political Economy (LPE) Project. March 1, 2021. https://bit.ly/3t5pP3g
61 Post, Robert. “Exit, Voice and the First Amendment Treatment of Social Media.” Law and Political Economy (LPE) Project. April 6, 2021. https://bit.ly/2Rg3W4c
62 Bergmayer, John. “What Makes a Common Carrier, and What Doesn’t.” Public Knowledge. January 14, 2021. https://bit.ly/2QBrxfI
63 Dwoskin, Elizabeth and Lerman, Rachel. “‘Stop the Steal’ supporters, restrained by Facebook, turn to Parler to peddle false election claims.” Washington Post. November 13, 2020. https://wapo.st/3xyixbD
Another element to consider in the common carrier framework is that of network effects, for which the historical example is the telephone system. In short, the term designates the phenomenon whereby networks become more valuable as more people use them. Law.com describes network effects as driving both speakers and listeners to be in the same place where everybody else is, in order to reach the broadest audience and to access the broadest range of content. Another consideration is that network owners control access to the ability to broadcast to a mass audience or to reach a niche one.64 This could be said of social media, where a breadth of information is classified and categorized by algorithms that feed personalized content back to users based on their browsing and engagement patterns. However, even though an alternative platform can emerge to serve a user who is denied access to one of the large, established mainstream platforms, the user experience will differ starkly because the alternative is likely unable to offer the same scale of content and massive audience – which ties back to the concept of network effects. Bergmayer concludes that the common carrier framework would not bring net positive results for regulating social networks, as unmoderated platforms would become oversaturated with low-quality content like abuse and spam, and would make it even easier for groups to organize mass violence without oversight or fear of retribution.

Renowned researcher danah boyd contends that Facebook is acquiring some public utility characteristics, though it is still not at the scale of the internet, and suggests that regulation may be in its future.65 In comparing social media platforms to traditional public utilities, Adam Thierer, Senior Research Fellow at George Mason University’s Mercatus Center, warns that treating nascent digital platforms as such would ultimately harm consumer welfare for a few key reasons. He sees public utility regulation as the “archenemy of innovation and competition.”66 Additionally, Thierer believes that calling social media natural monopolies would turn into a self-fulfilling prophecy. Finally, given that social media are tied up with the production and dissemination of speech and expression, First Amendment values are implicated, even though the amendment does not technically bind private Big Tech companies. Platforms are thus expected to retain the editorial discretion to determine what can appear on their sites.67 Given this, and given their growing role in public discourse, it is no surprise that academics, digital rights advocates and lawmakers are closely reviewing internally crafted content policies to determine whether certain decisions could be considered to amount to censorship.
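The scale advantage conferred by the network effects described above can be made concrete with a standard heuristic. The sketch below uses Metcalfe’s law – a rough rule of thumb, not drawn from the sources cited here, under which a network’s value grows roughly with the number of possible user-to-user connections – together with the Facebook user figure cited earlier and a hypothetical challenger size, to show why a fraction of the users translates into far less than a proportional fraction of the value.

```python
# Illustrative heuristic only (Metcalfe-style): value ~ number of possible
# pairwise connections, not the raw user count. Challenger size is hypothetical;
# the incumbent figure is the Facebook user count cited above.
def metcalfe_value(users: int) -> int:
    return users * (users - 1) // 2  # potential user-to-user connections

incumbent = 2_740_000_000   # roughly Facebook-scale
challenger = 15_000_000     # hypothetical alternative network

ratio_users = challenger / incumbent
ratio_value = metcalfe_value(challenger) / metcalfe_value(incumbent)

print(f"Challenger has {ratio_users:.2%} of the users "
      f"but only {ratio_value:.4%} of the potential connections.")
```

Under this admittedly crude measure, a challenger with roughly half a percent of the incumbent’s users offers only around three-thousandths of a percent of its potential connections, which echoes the point above that an alternative platform is likely unable to offer the same scale of content and massive audience.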
64Law Journal Editorial Board. “Are Social Media Companies Common Carriers?” Law.com. March 14, 2021. https://bit.ly/3xyiHQh 65boyd, danah. “Facebook Is a Utility; Utilities Get Regulated.” ZEPHORIA. May 15, 2010. https://bit.ly/3gPo9s9 66Thierer, Adam. “The Perils of Classifying Social Media Platforms as Public Utilities.” George Mason University Mercatus Center. March 19, 2012. https://bit.ly/333bFFo 67Thierer, Adam. “The Perils of Classifying Social Media Platforms as Public Utilities.” George Mason University Mercatus Center. March 19, 2012. https://bit.ly/333bFFo
On the natural monopoly front, Zeynep Tufekci, an assistant professor at the University of North Carolina, Chapel Hill, argues that "many such services are natural monopolies: Google, Ebay, Facebook, Amazon, all benefit greatly from network externalities which means that the more people on the service, the more useful it is for everyone." In particular, she worries about Facebook causing a "corporatization of social commons" and about the dangers of the "privatization of our publics."68 Here again, Thierer pushes back, pointing out that the traditional pillars of media regulation for broadcast radio and television were scarcity and the supposed need for government allocation of the underlying limited resource, the broadcast spectrum. In contrast, social media services are not “physical resources with high fixed costs.” He concludes by contending that social media platforms do not possess the qualities typically associated with public utilities and common carriers.69
Based on the discussion above, while there is some validity to the concerns of scholars like Tufekci that private companies increasingly “own” digital public squares, the common carrier framework traditionally applied to a limited broadcast spectrum is not suitable for social media today. This is because, while many take issue with the fact that a few private companies command much of the information and communications space and battle valid claims of content and user discrimination, their services are free to access and they provide a wide variety of information sources for users to choose from. Because social media operates under different conditions and lacks the constraints of broadcast networks, solutions to tackle disinformation and hate speech inciting violence on the former should not seek inspiration from the latter. Broadcast and social media networks’ foundational services differ radically, and thus should not be regulated in a similar manner.
FREE MARKET FRAMEWORK
The dominant free market framework is defined by the current status quo of “ad hoc platform law” and a whack-a-mole content takedown strategy, wherein platforms make their own rules in the absence of formal policy intervention. Under this framework, if business interests are consistent with minimizing harmful content or protecting free speech, companies will pursue those goals because they align with market incentives – which nets out to sustaining a strong user base and keeping advertising partners happy. Now, platforms are facing a major public reckoning and pushback against their techno-solutionist strategy to
68Tufekci, Zeynep. “Facebook: The Privatization of Our Privates and Life in the Company Town.” Technosociology. May 14, 2010. https://bit.ly/3gPA1dR 69Nat. Broad. Co. v. United States, 319 U.S. 190, 226-27 (1943); see also Red Lion Broad. Co. v. F.C.C., 395 U.S. 367, 375 (1969)
combat hate speech and disinformation, as it is revealed how the core ad monetization business model helps spread and amplify harmful content under the guise of free speech. Experts have long expressed concern that tech giants program their features to favor profit over societal benefit, especially around civic issues. The January 6 attack on the U.S. Capitol was organized in plain sight on social media platforms, and offers a wake-up call about their growing power and reach beyond the confines of cyberspace.70 Facebook and Twitter swiftly banned accounts and removed radicalizing content that spawned the violent mob, culminating in the ban of former President Trump from the platforms, citing his use of social media to share misleading content and inflame millions of his followers. But many decried these actions as “too little, too late.”71 The move prompted the migration of many users – especially in conservative and alt-right circles, and those whose accounts had been suspended or removed from mainstream networks – to Parler, which bills itself as “the only neutral social media platform” for being largely unmoderated. The app was spawned because its founders claimed to be “exhausted with a lack of transparency in big tech, ideological suppression and privacy abuse” on Big Tech platforms.72 Dipayan Ghosh, Co-Director of the Digital Platforms & Democracy Project at Harvard’s Shorenstein Center on Media, Politics and Public Policy, pondered whether the Trump ban indicated a turning point in how platforms handle potentially harmful content, and what it heralds for their self-regulation.73 The “de-platforming” was decried as “problematic” by world leaders including German chancellor Angela Merkel, who noted that it called into question the “right to freedom of opinion [that] is of fundamental importance.” Ghosh argues that even those who felt the ban was appropriate acknowledge that tackling a single account in a politically divisive environment is not an adequate solution to the deep-rooted issues underlying platforms’ tendency to promote and amplify extremist groups, hate speech inciting violence, political propaganda and disinformation, and other controversial content that serves their bottom line.74 A March hearing titled “Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation” demonstrated that lawmakers are keenly aware of the ways in which social media platforms prioritize user engagement and monetization schemes that have enabled the proliferation of extreme and false material, and of the lack of risk mitigation and prevention methods baked into current rules and practices.
70Frenkel, Sheera. “The storming of Capitol Hill was organized on social media.” New York Times. January 6, 2021. https://nyti.ms/3xAYnOk 71Culliford, Elizabeth; Menn, Joseph and Paul, Katie. “Analysis: Facebook and Twitter crackdown around Capitol siege is too little, too late.” Reuters. January 8, 2021. https://reut.rs/3vwl5Fr 72Hadavas, Chloe. “What’s the Deal With Parler?” Slate. July 3, 2020. https://bit.ly/3t4M0GI 73Ghosh, Dipayan. “Are We Entering a New Era of Social Media Regulation?” Harvard Business Review. January 14, 2021. https://bit.ly/3nGTHly 74Ghosh, Dipayan. “Are We Entering a New Era of Social Media Regulation?” Harvard Business Review. January 14, 2021. https://bit.ly/3nGTHly
Illinois Democrat Robin Kelly succinctly summarizes the problem inherent in the free market framework that has shielded platforms’ business model from scrutiny and regulatory action: “The business model for your platforms is quite simple: keep users engaged. The more time people spend on social media, the more data harvested and targeted ads sold. To build that engagement, social media platforms amplify content that gets attention. That can be cat videos or vacation pictures, but too often it means content that’s incendiary, contains conspiracy theories or violence. Algorithms on the platforms can actively funnel users from the mainstream to the fringe, subjecting users to more extreme content, all to maintain user engagement. This is a fundamental flaw in your business model that mere warning labels on posts, temporary suspensions of some accounts, and even content moderation cannot address. And your companies’ insatiable desire to maintain user engagement will continue to give such content a safe haven if doing so improves your bottom line.”75 There are many examples that justify Kelly’s accusations. While Facebook relies on the U.S. State Department’s list of designated terrorist organizations, that list does not include many white supremacist groups, such as “Alt-Reich Nation,” one of whose members was recently charged with murdering a black college student in Maryland. The platform still hosts a number of hateful and conspiratorial groups, including white supremacist groups with hundreds of thousands of members, and regularly recommends that users join them, according to a study76 published by the Anti-Defamation League.77 Twitter has also faced its fair share of complaints about letting white nationalists use the platform even after being banned, and has said that it plans to conduct academic research on the subject.78 One investigation into Facebook’s failure to address such ills came in a Wall Street Journal article reporting that leadership ignored a 2018 internal presentation showing the company was well aware that its recommendation engine stoked divisiveness and polarization. One slide read: “Our algorithms exploit the human brain’s attraction to divisiveness. If left unchecked,” it warned, Facebook would feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”79 This finding shows that senior leadership sought to absolve the company of responsibility and chose not to implement changes to its service that would minimize the promotion of hate speech and bad actors, for fear such changes would disproportionately affect conservative users and hurt engagement.80
75Edelman, Gilad. “Social Media CEOs Can’t Defend Their Business Model.” WIRED. March 25, 2021. https://bit.ly/3aQpsDk
76Hateful and Conspiratorial Groups on Facebook. Anti-Defamation League. August 3, 2020. https://bit.ly/3aN30Ld
77McEvoy, Jemima. “Study: Facebook Allows And Recommends White Supremacist, Anti-Semitic And QAnon Groups With Thousands Of Members.” Forbes. August 4, 2020. https://bit.ly/3aN37q7
78Newton, Casey. “How white supremacists evade Facebook bans.” The Verge. May 31, 2019. https://bit.ly/3u5QEWv
79Horwitz, Jeff and Seetharaman, Deepa. “Facebook Executives Shut Down Efforts to Make the Site Less Divisive.” Wall Street Journal. May 26, 2020. https://on.wsj.com/3e62I4k
80Seetharaman, Deepa. “Facebook Throws More Money at Wiping Out Hate Speech and Bad Actors.” Wall Street Journal. May 15, 2018. https://on.wsj.com/2QJ06k1
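To illustrate the dynamic Kelly describes above, the sketch below shows a deliberately simplified engagement-optimized ranking function; the weights and example posts are hypothetical and do not reflect any platform’s actual algorithm, but they capture how treating every interaction as positive signal pushes the most provocative content to the top of a feed.

```python
# Hypothetical sketch of engagement-optimized ranking; the weights and
# example posts are invented and do not reflect any platform's real system.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    reactions: int  # likes, angry faces, etc.
    comments: int   # often a proxy for argument and controversy
    shares: int

def engagement_score(post: Post) -> float:
    # Every interaction counts as positive signal, regardless of whether it
    # reflects outrage, agreement, or simple curiosity.
    return 1.0 * post.reactions + 2.0 * post.comments + 3.0 * post.shares

feed = [
    Post("Vacation photos", reactions=120, comments=5, shares=2),
    Post("Incendiary conspiracy claim", reactions=90, comments=400, shares=150),
    Post("Local news summary", reactions=60, comments=10, shares=8),
]

# Ranking purely by engagement puts the most provocative post first.
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post.text}")
```

The point of the sketch is simply that a ranking objective blind to why users engage will systematically reward the incendiary material Kelly describes.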
Facebook was also found not to have enforced its rule against “calls to arms” ahead of the Kenosha shooting, despite CEO Mark Zuckerberg stating it had removed a militia event where members discussed gathering in Kenosha, Wisconsin, to shoot and kill protesters.81 Last summer’s Stop Hate for Profit boycott campaign82 by leading advertisers was one market response to what some feel has been a limited and inadequate effort by social media companies to proactively police misinformation and hate speech. Civil rights groups bolstered this effort by calling on large advertisers to stop Facebook ad campaigns during July, saying the social network isn’t doing enough to curtail racist and violent content on its platform.83 The campaign focused its pressure on Facebook specifically because of its scale and because advertisers feel it has been less proactive than rivals Twitter and YouTube in policing misinformation and hate speech.84 Despite widespread support from major conglomerates that paused ads on the platform, and predictions that the boycott would cost Facebook over $70 million, analysts affirm it had little impact on the company’s revenue.85 Such limited substantive change demonstrates that the free market framework is unable to balance private financial incentives against calls from ad partners, users and civil society advocacy groups tying real-life negative consequences to hate fomented online. In March, Reporters Without Borders filed a lawsuit arguing that Facebook engaged in “deceptive commercial practices” by allowing disinformation and threats to flourish despite promising users that it will “exercise professional diligence” to create “a safe, secure and error-free environment.” Their specific claims center on a lack of commitment to promises made in Facebook’s terms and conditions, calling them deceitful and contradicted by “the large-scale dissemination of hate speech and false information on its networks.”86 Ahead of the 2020 U.S. election, and in the wake of 2016 election meddling online, platforms were grappling with how to handle political advertising, as there was fear their networks could have outsized power to change the balance of elections by targeting and influencing voter behavior.87 Twitter announced it would no longer serve political ads,88 and YouTube announced that it would
81Mac, Ryan and Silverman, Craig. “How Facebook Failed Kenosha.” Buzzfeed. September 3, 2020. https://bit.ly/3e4LlRh 82Stop Hate for Profit landing page. Accessed April 29, 2021. https://bit.ly/3t9IScJ 83Arbel, Tali. “Civil rights groups call for ‘pause’ on Facebook ads.” AP News. June 17, 2020. https://bit.ly/33gZ3ut 84Fischer, Sara. “Stop Hate for Profit social media boycott to focus its pressure on Facebook.” Axios. September 22, 2020. https://bit.ly/3xDpCaX 85Abril, Danielle. “Facebook ad boycott: ‘It’s not going to do anything to the company financially’.” Fortune. June 24, 2020. https://bit.ly/3eNTeK4 86Riley, Charles. “Facebook accused of failing to provide a 'safe' environment for users.” CNN. March 23, 2021. https://cnn.it/3eHNLVf 87Ryan-Mosley, Tate. “Why Facebook’s political-ad ban is taking on the wrong problem.” MIT Technology Review. September 6, 2020. https://bit.ly/3e7mv3x 88Feiner, Lauren. “Twitter bans political ads after Facebook refused to do so.” CNBC. October 30, 2019. https://cnb.cx/3ucwEkM
remove thousands of videos promoting white supremacy and other hateful material.89 Additionally, Facebook later decided to temporarily halt political ads, broadening its earlier restrictions, in order to limit confusion, misinformation and abuse of its services.90 Clearly, this framework reveals inconsistencies in how platforms – which carry a difficult but undeniable responsibility as our digital public squares – have historically responded to charged events being organized on their sites, and to civic events like the 2020 U.S. election. In the wake of related PR crises and complaints about ideological biases and noxious content, Facebook announced the debut of its third-party but internally funded Oversight Board as one free market framework mechanism,91 hiring a group of subject matter experts – ranging from lawyers to human rights specialists to civil society members – to adjudicate whether specific posts should be taken down, holding a “final say” over how to handle controversial content such as hate speech.92 The company places heavy emphasis on the Board’s independence, though that claim is undercut by the fact that each member is paid a six-figure salary by the company,93 the Board can only interpret Facebook’s existing rules, and CEO Mark Zuckerberg is under no legal obligation to abide by its rulings. Crucially, nothing comes before the Board that has not already been taken down by Facebook, which leaves major gaps in Facebook’s stated commitment to improve public accountability measures.94 In January, Twitter announced the pilot phase of Birdwatch, a tool to crowdsource the content fact-checking process.95 Similar to the Wikipedia volunteer content management model – often cited as the most thorough and factual around – Twitter would harness its own community to shape its information landscape.96 The tool is especially salient for posts that fall into grey areas: content that does not violate rules but could still benefit from added context. Since false information can spread rapidly, Birdwatch aims to speed up a labelling process Twitter has struggled to scale. Preceding this effort, Twitter had guardrails like a civic integrity policy,97 and undertook removing fake accounts and labelling and reducing the visibility of tweets containing false or misleading information.
89Allam, Hannah. “YouTube Announces It Will Ban White Supremacist Content, Other Hateful Material.” NPR. June 5, 2019. https://n.pr/3gQHvNF 90Dwoskin, Elizabeth. “Facebook to temporarily halt political ads in U.S. after polls close Nov. 3, broadening earlier restrictions.” Washington Post. October 7, 2020. https://wapo.st/3h5Cf99 91Oversight Board landing page. Accessed April 29, 2021. https://www.oversightboard.com/ 92Levine, Alexandra and Overly, Steven. “Facebook announces first 20 picks for global oversight board.” POLITICO. May 6, 2020. https://politi.co/3xDbNtb 93Akhtar, Alana. “Facebook's Oversight Board members reportedly earn 6-figure salaries and only work 'about 15 hours a week'.” Business Insider. February 13, 2021. https://bit.ly/336Ewsa 94Ghosh, Dipayan. “Facebook’s Oversight Board Is Not Enough.” Harvard Business Review. October 16, 2019. https://bit.ly/3e7oFzQ 95Coleman, Keith. “Introducing Birdwatch, a community-based approach to misinformation.” Twitter Blog. January 25, 2021. https://bit.ly/3nCNtmx 96Collins, Ben and Zadrozny, Brandy. “Twitter launches 'Birdwatch,' a forum to combat misinformation.” CNBC. January 25, 2021. https://nbcnews.to/3e3DV0J 97Civic integrity policy. Twitter Help Center. January 2021. https://bit.ly/3e9EeHz
“Birdwatchers” add links to their own sources, label tweets much as Twitter itself does, and rate each other’s notes so administrators can elevate or remove posts accordingly. Rallying users who know the platform best and have a vested interest in its functioning as a fact-based forum – with their reputations on the line – makes sense. Twitter’s vision for this open-source ethos and collective consensus is that users would come away better informed. Ultimately, this experiment will determine whether Twitter users trust each other more than they trust the company to verify what they see on their newsfeeds. Birdwatch has only about a thousand beta users, so it has been hard to measure its impact at its current scope. VP of Product Keith Coleman acknowledges the results could be of mixed quality, but hopes that surfacing the best notes through the rating system and feedback loop will, over time, realize Twitter’s vision of a healthier forum.98 Both Facebook and Twitter say their goal is to build a new model for governing what appears on their platforms. Their new tools are radically different and have received equal amounts of flak and praise.99 Many critics see them for what they are: attempts to get ahead of, or altogether skirt, potential government regulation and hefty fines for monopolizing power over online discourse. Emily Bell, director of the Tow Center for Digital Journalism at Columbia University’s Graduate School of Journalism and a Guardian columnist, affirms that “the social media giant is still trying to navigate controversial content, yet the problem remains the platform itself,” as the Board’s power remains illusory.100 The Board’s recent rulings overturned four out of five of Facebook’s content moderation decisions in cases involving hate speech, incitement to violence and other thorny topics,101 calling for the posts to be restored, with one member stating that “this is the first time that Facebook has been overruled on a content decision by the independent judgement of the Oversight Board [with the ability to] provide a critical independent check on how Facebook moderates content.”102 Even so, the Board’s jurisdiction and authority are not expansive enough, as it can only review a small fraction of cases despite having received more than 150,000 of them since October 2020.103 Additionally, interrogating the core business model falls outside its purview, even though it seems logical that the business model should not be divorced from content moderation policies. Though the Board has the power to overrule CEO Mark Zuckerberg, nothing came before it that had not already been taken down by Facebook, which left major gaps; an April 2021 update, however, confirms that users can now submit petitions
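As a rough sketch of how a Birdwatch-style rating loop could decide which notes get surfaced – the fields, thresholds and scoring below are hypothetical and are not Twitter’s published mechanics – consider:

```python
# Hypothetical sketch of a Birdwatch-style note rating loop; the fields,
# thresholds and scoring are invented, not Twitter's published mechanics.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Note:
    tweet_id: str
    body: str
    source_url: str
    ratings: List[bool] = field(default_factory=list)  # True = rated "helpful"

    def helpfulness(self) -> float:
        # Share of raters who found the note helpful; 0.0 if not yet rated.
        return sum(self.ratings) / len(self.ratings) if self.ratings else 0.0

def visible_notes(notes: List[Note], min_ratings: int = 5,
                  threshold: float = 0.7) -> List[Note]:
    # Surface only notes that enough contributors have rated and that a clear
    # majority judged helpful; everything else stays hidden from the tweet.
    return [n for n in notes
            if len(n.ratings) >= min_ratings and n.helpfulness() >= threshold]

notes = [
    Note("t1", "Claim is contradicted by certified vote tallies.",
         "https://example.org/results",
         [True, True, True, True, True, False]),
    Note("t1", "Nothing wrong here, ignore the fact-checkers.",
         "https://example.org/blog",
         [False, False, True]),
]

for note in visible_notes(notes):
    print(note.tweet_id, "->", note.body)
```

The design question the sketch highlights is the same one Twitter faces: whether a community rating threshold can reliably separate well-sourced context from partisan pile-ons.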
98Bond, Shannon. “Twitter's 'Birdwatch' Aims to Crowdsource Fight Against Misinformation.” NPR. February 10, 2021. https://n.pr/2PEt1VZ 99Pasternack, Alex. “Twitter wants your help fighting falsehoods. It’s risky, but it might just work.” WIRED. January 28, 2021. https://bit.ly/3u4ojj8 100Bell, Emily. “Facebook has beefed up its ‘oversight board’, but any new powers are illusory.” The Guardian. April 14, 2021. https://bit.ly/3xC7U7Q 101Oversight Board Decisions landing page. Accessed April 29, 2021. https://oversightboard.com/decision/ 102Mihalcik, Carrie and Wong, Queenie. “Facebook oversight board overturns 4 of 5 items in its first decisions.” CNET. January 29, 2021. https://cnet.co/3gZRDDA 103ibid
for content removals, a departure from the Board previously only considering whether removed posts should be restored.104 This is evidence that the body is still an evolving mechanism, and how future rulings play out will dictate whether Facebook sees it in its best interest to grant the Board more power over internal policy or to widen its caseload. While creative, neither the Oversight Board nor Birdwatch constitutes a holistic solution to the foremost challenges of our internet age, and neither should be viewed as a substitute for the official regulation that information and communication networks currently lack. Nevertheless, market solutions can be easier to implement and enforce than policy in the short run, so assessing their goals and efficacy so far is instructive for the evolving content moderation landscape. To this point, the Oversight Board’s reviews and the public comments around hate speech cases can provide some nuance on content and context that the company’s algorithms cannot always arbitrate well. Member and constitutional law expert Jamal Greene affirms that the cases, which also involved content removed under rules on adult nudity, dangerous individuals and organizations, and violence and incitement, raise “important line-drawing questions.” Another important development following the Board’s establishment is that Facebook for the first time disclosed numbers on the prevalence of hate speech on the platform, saying that out of every 10,000 content views in the third quarter, 10 to 11 included hate speech.105 All in all, it is clear platforms are still searching for ways to govern themselves. Their latest efforts to avoid public sector scrutiny simply shift the responsibility of moderation to users and experts, and may be considered a diversion from interrogating the source of the disinformation and hate speech plaguing them: the fact that platforms remain the only entities with any visibility into the algorithmic design “black boxes”106 that shape public discourse online. Public Knowledge Senior Vice President Harold Feld believes legislation that weighs evidence and balances interests is explicitly the job of Congress, not of private companies. Feld takes issue with the current practice of pressuring companies to take “voluntary” action because it relieves Congress of the need to outline requirements companies must follow. Additionally, he notes that this opens the door to soft censorship and the promotion of political propaganda in the name of “responsible” corporate governance. He emphasizes that however difficult and controversial Congress may find it to develop content moderation requirements for digital platforms, perpetuating current efforts to force platforms to create their own policies without any
104De Chant, Tim. “Facebook users can now petition oversight board to remove content.” Ars Technica. April 13, 2021. https://bit.ly/3nAhenW 105Culliford, Elizabeth. “From hate speech to nudity, Facebook's oversight board picks its first cases.” Reuters. December 1, 2020. https://reut.rs/2PCDbpY 106Stern, Joanna. “Social-Media Algorithms Rule How We See the World. Good Luck Trying to Stop Them.” Wall Street Journal. January 17, 2021. https://on.wsj.com/3e4vwu6
formal guidance or oversight from lawmakers to balance platform discretion is corrosive to democracy and undermines free speech values.107 Though Facebook publishes a quarterly Community Standards Enforcement Report108 to track progress on its efforts to act on content that violates its policies, reading it requires downloading a hefty document and parsing a lot of data, which is likely a cumbersome task for most. Crucially, it does nothing to illuminate the inner workings of the company’s code in decision-making processes. One recent development on this front is Twitter’s April 2021 announcement that it is making some strides towards sharing how race and politics shape its algorithms. The company will study the technology’s inherent biases in a new effort to understand how its machine learning tools can cause unintended consequences, and will share some of these insights publicly. Twitter ML Ethics lead Rumman Chowdhury outlined an approach built around what the company calls the pillars of “responsible ML,” which include “taking responsibility for our algorithmic decisions, equity and fairness of outcomes, transparency about our decisions and how we arrived at them, and enabling agency and algorithmic choice.”109 More transparency, accountability and structural reassessment of the News Feed algorithm have been vigorously demanded of platforms – so much so that they have seen agitation from their own employees.110 Internal calls for change would amount to a failure of the free market framework if the end result is platforms losing talent. But if such pressure and backlash persist and result in tangible updates to content moderation practices that fall short, this would demonstrate the approach can be effective: removing harmful content in order to retain and attract the talent needed to support the business. While the evidence above demonstrates that a free market mechanism like the third-party Oversight Board can help improve the platform’s record and responsiveness on the accountability and transparency front, its jurisdiction remains too narrow, and adjusting existing policies that do not go far enough in limiting disinformation and hate speech still falls outside its purview. And while the creation of the Oversight Board and Twitter’s Birdwatch signals a growing awareness and acknowledgment by companies that their content moderation practices are not effective, with the added benefit of increased transparency reporting, such mechanisms are not enough to placate lawmakers, academics and civil society members who are critical of the core business model and demanding a view into algorithmic creation and amplification.
107Stern, Joanna. “Social-Media Algorithms Rule How We See the World. Good Luck Trying to Stop Them.” Wall Street Journal. January 17, 2021. https://on.wsj.com/3e4vwu6 108Facebook Community Standards Enforcement Report landing page. Accessed April 29, 2021. https://bit.ly/3eD2YGU 109Kramer, Anna. “Twitter will share how race and politics shape its algorithms.” Protocol. April 14, 2021. https://bit.ly/2PCsRhF 110Frenkel, Shira; Isaac, Mike and Roose, Kevin. “Facebook Struggles to Balance Civility and Growth.” New York Times. November 24, 2020. https://nyti.ms/2QLVCsL
Platforms’ negligence in responding to internal reporting that their features increased polarization and amplified extremist and controversial content, and their failure to act on flags of problematic Groups or posts in a timely manner, are also top of mind. Demands – and potential regulation on the horizon – focused on unpacking how massive volumes of disinformation and hate speech proliferated online