How Extreme Should the FBI and DHS Be in Removing Extremism from Gaming?

Wilson Sonsini Goodrich & Rosati

The United States Government Accountability Office (GAO) recently published a report that highlights the federal government’s increased interest in monitoring domestic violent extremist content on gaming and social media platforms. The purpose of the report is to provide information to other agencies and to encourage the development of actionable strategies to combat domestic violent extremist content. Forthcoming guidance, scheduled for release in June 2024, could leave companies navigating an increasingly complex moderation landscape and increased communication with the Federal Bureau of Investigation (FBI) and the Department of Homeland Security (DHS).

Background

The GAO report was prompted by a rise in high-profile, real-world attacks that appeared to stem from the attackers’ consumption of certain types of content, including within the communities of certain games. The GAO defines “domestic violent extremists” as United States-based actors who, without direction or inspiration from a foreign terrorist group or foreign power, seek to further political or social goals through unlawful acts of violence. However, defining domestic violent extremists is not straightforward, as different agencies have given varying weight to different factual circumstances in determining whether an individual should be considered a domestic violent extremist.

The GAO report focused on: (1) why domestic violent extremists use social media and gaming platforms; (2) what companies are currently doing; and (3) how the FBI and DHS need to develop a cohesive information-sharing strategy.

Why Do Domestic Violent Extremists Use Social Media and Video Game Platforms?

The GAO found that by using social media and video game platforms, domestic violent extremists reached broader audiences, inserted extremist ideas into the mainstream conversation (thus tempering broader societal reaction to more extreme viewpoints), and radicalized, recruited, and mobilized a broader swath of users.

Current Industry Methods and Standards

Companies use different tools to moderate content. Some combine human moderation with machine learning tools, while others run a “trusted flagger” program or use design features to discourage violent content. Although companies are increasing their use of machine learning tools, these tools often cannot accurately sift through the millions of pieces of content uploaded to gaming and social media platforms each day. The GAO report also notes that even when these programs are successful, they can impose a large logistical, financial, and administrative burden on a company.

An additional issue that companies must grapple with is that there is no unified standard for what counts as “violent” content. For example, a posted video about a protest could be considered an allowable expression of free speech by one platform, whereas another platform could conclude that the protest breached the civil peace and take the video down. Ultimately, despite companies’ moderation efforts, the lack of information flow has caused the agencies’ broader mission of addressing violent online content to falter.

On the agencies’ side, the FBI and DHS hold meetings with companies to share information about activities that promote domestic violent extremism, and they maintain hotlines that allow companies to report users promoting violent content or content that violates a company’s terms of service. Despite these myriad efforts, however, there is no unified action plan across the private and public sectors that translates into real-world impact. The report concludes by recommending that the FBI and DHS create strategies for these information-sharing efforts with gaming and social media companies.

Wilson Sonsini Insights

The federal government is taking an increased interest in working with social media and gaming companies to combat the rise of content that promotes domestic violent extremism. Two recent cases, Twitter, Inc., et al., v. Mehier Taamneh, et al., 598 U.S. 471 (2023), and the ongoing case of Diona Patterson et al., v. Meta Platforms, et al., 0805896/2023 (NYSCEF Doc No. 409), are illustrative of the shifting opinion that companies should have narrower protections under Section 230 for user content on their platforms.

In Taamneh, the question at issue was whether internet service providers were liable for “aiding and abetting” a foreign terrorist organization by recommending its content to users. The Court ultimately resolved the case on other grounds without reaching the Section 230 question, but its grant of certiorari in the first place signals doubt about the extent of Section 230 protections. Furthermore, the Court’s silence in Taamneh on Section 230’s scope leaves smaller platform companies facing dramatically larger risks. Risk-averse platforms will be incentivized to censor content that could potentially bring liability rather than risk suit, hindering free speech and indirectly censoring certain communities.

In Patterson, the question was whether the algorithms that drive social media apps are publishing platforms that Section 230 was designed to protect or products designed for a specific function that would open a company up to strict product liability claims. The case, while ongoing, has already produced a win for the plaintiffs at the trial court level, where the court rejected the argument that Section 230 barred a strict product liability claim outright. Although the plaintiffs face an uphill battle in proving that platforms should be treated like products, as in Taamneh, the lack of clarity about what exactly Section 230 protects opens a Pandora’s box that falls squarely on the shoulders of platforms.

The uncertainty raised by these two cases and the burdens placed on companies will be further exacerbated by any explicit or implied requirements created by the FBI and DHS.

An Implied Duty?

The GAO report is not binding law, and it acknowledges that federal agencies define goals as a best practice to effectively implement action. However, even if the GAO report does not draw any regulatory lines, it is creating an environment in which companies may have to follow a more standardized, government-imposed duty when creating or moderating their platforms. Although no rulemaking has been proposed as of this report, these suggestions could take on the practical force of law through negligence actions if a consumer argues that a company’s non-compliance with the FBI’s and DHS’ recommended strategies violates the industry standard of care. For example, if it becomes commonplace for companies to have a “trusted flagger” program for sharing information (where extremism subject-matter experts use a direct line to report potentially dangerous content to the FBI or DHS), other companies may be pressed to follow suit, regardless of the administrative, financial, or logistical costs. If these best practices were recognized or accepted as an actual obligatory duty, companies would be forced to comply in order to be viewed as acting reasonably.

First Amendment Considerations

While the report acknowledges that content promoting violent extremism may be constitutionally protected by the First Amendment, it does not acknowledge that a core purpose of the First Amendment is to protect the neutral treatment of ideas. Rather, the report states that speech can lose its constitutional protection if the speech itself represents a “true threat.” These “true threats” are identified by five categories of violent extremism motivated by: (1) race or ethnicity; (2) anti-government or anti-authority sentiment; (3) animal rights or environmental sentiment; (4) abortion-related issues; and (5) other domestic terrorism threats not otherwise defined. These classifications are so broad and amorphous that the agencies in charge of monitoring speech could use them to force companies to moderate any given speech or ideology in the name of a “national or departmental mission.” This broad set of categorizations, in conjunction with an implied duty, could lead to companies being conscripted into categorizing or monitoring content in ways that are at odds with their own internal beliefs.

Conclusion

Companies that want to engage directly with the FBI and DHS can join non-governmental organizations such as the Global Internet Forum to Counter Terrorism or Tech Against Terrorism, which represent their members’ interests to the agencies. Companies should also continue to implement best practices, such as content moderation and flagging violent content, while the FBI and DHS determine the next steps for effective information sharing. These incoming policies will likely require companies to invest additional capital in monitoring violent content.

While guidance is crafted and the government continues to scrutinize the tech industry, we expect an ongoing discussion between gaming and social media platforms seeking to maintain self-regulation and a government attempting to become more directly involved in the internet’s governance.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations.

© Wilson Sonsini Goodrich & Rosati | Attorney Advertising
