For almost a quarter of a century, curators of online content have avoided being treated as “publishers” or “speakers” of statements made by third-party users thanks to Section 230(c)(1) of the Communications Decency Act (“CDA”) (codified at 47 U.S.C. § 230), which states that “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Section 230 was initially passed to encourage online content platforms to self-regulate, by granting immunity for blocking offensive material, and to encourage the growth of online forums, by immunizing platforms against liability for third-party speech. Since its passage, this seemingly innocuous 26-word provision has broadly shielded companies that host online platforms from responsibility for inflammatory, offensive, or illegal content posted to those platforms. It gives platforms the ability to choose whether to regulate such content on their own, without the risk of being held to the same standards as traditional publishers. In remarks to the National Association of Attorneys General, Attorney General William Barr recently announced that DOJ has made “the review of market-leading online platforms a top priority,” and signaled that state attorneys general should join DOJ in reviewing Section 230 and its scope, with the aim of ultimately arriving at a unified approach.
Without Section 230, internet service providers would have to function like newspaper editors, answerable for the content they carry, rather than like bulletin boards. According to the Electronic Frontier Foundation, “[t]hough there are important exceptions for certain criminal and intellectual property-based claims, CDA 230 creates a broad protection that has allowed innovation and free speech online to flourish.” Largely because this protection is so essential to modern internet-based computer services, Jeff Kosseff, a professor at the Center for Cyber Security Studies at the United States Naval Academy, has called Section 230 “The Twenty-Six Words That Created the Internet.”
But those “twenty-six words” are under threat. In recent months, the special protection offered by Section 230 has come under increasing public scrutiny, especially from DOJ. On December 10, 2019, in remarks to the National Association of Attorneys General, AG Barr suggested that “[g]ranting broad immunity to platforms that take no efforts to mitigate unlawful behavior or, worse, that purposefully blind themselves” was “not consistent with [the] purpose” of Section 230. That purpose, he said, was to nurture the internet at a time when it was “relatively new” and “to protect the ‘good Samaritan’ interactive computer service that takes affirmative steps to police its own platform for unlawful or harmful content.” AG Barr also noted a tension in allowing “platforms to absolve themselves completely of responsibility for policing their platforms, while blocking or removing third-party speech—including political speech—selectively, and with impunity.”
Other politicians have echoed this sentiment. Earlier this year, Senator Josh Hawley of Missouri introduced legislation titled the “Ending Support for Internet Censorship Act,” which, for companies meeting certain size thresholds, would condition Section 230 immunity on first obtaining a certification from the FTC that the company has shown, by clear and convincing evidence, that it does not moderate information provided by other information content providers in a politically biased manner. The bill’s stated purpose is to “encourage providers of interactive computer services to provide content moderation that is politically neutral.” Other politicians have called for more drastic changes to the regulation of big tech. Massachusetts Senator and Democratic presidential candidate Elizabeth Warren, among others, has called for breaking up large tech companies and has proposed forcing certain corporations to unwind past acquisitions.
Further evidence of DOJ’s ongoing interest in Section 230 protection is the workshop it hosted on February 19, 2020, entitled “Section 230: Nurturing Innovation or Fostering Unaccountability?” The agenda covered “the evolution of Section 230 from its original purpose in granting limited immunity to Internet companies, its impact on the American people, and whether improvements to the law should be made.”
The protection granted by Section 230 can preempt state criminal laws, something the National Association of Attorneys General has proposed changing since at least 2013 in the interest of pursuing human trafficking networks that operate through some online platforms. One potential problem with eliminating Section 230 protection, however, as tech advocates have noted, is that platform operators would then be exposed to liability under a patchwork of state standards that criminalize currently legal practices to varying degrees. Such an outcome, they argue, could have a chilling effect. Many states, for example, have statutes that criminalize defamation. A platform that fears being held criminally liable for all of its users’ content may radically rethink its willingness to host that content in the first place.
Others argue that Section 230 protection need not disappear for federal regulators to pursue bad actors for offenses like human trafficking; the statute has never barred enforcement of federal criminal law, as was seen with DOJ’s prosecution of the executives behind the website “Backpage” for facilitating prostitution and human trafficking.
What This Means For You
The repeal or diminishment of Section 230 would have wide-reaching ramifications for the internet.
- Those who host content online posted by third parties not under their control (i.e., those who are “interactive computer service” providers) should pay close attention, as changes in this area could cause massive shifts in potential liability. For example, if Section 230 were reformed so that it no longer preempted state criminal laws, companies would have to navigate the patchwork of state laws criminalizing defamation.
- Changes to Section 230 may indirectly affect users of services provided by major companies such as Facebook and Google. These companies may limit investment in new platforms and acquisitions, adopt more rigorous censorship, or impose limits on third parties that use their networks as sources of revenue. Certain companies have already demonstrated a willingness to moderate themselves and their business models in response to pressure from private market forces, such as advertisers. It is not a stretch to imagine that increased pressure from regulators would have a similar effect.
- Even companies that do not host third-party content should remain vigilant, as it is clear that the U.S. government’s appetite for regulating large technology companies is growing. Companies would do well to get out in front of regulation, as Amazon has done with respect to facial recognition technology (as discussed in a previous post).