
Gaming platforms working with DHS to crack down on ‘extremists’

Gaming platforms are working with the Department of Homeland Security’s Office of Intelligence and Analysis (I&A) to crack down on “domestic violent extremists.”

A recent report from the Government Accountability Office (GAO) explored how the DHS, FBI, and social media and gaming platforms are working in tandem to suppress what the federal government has classified as “domestic violent extremism” (DVE), a term synonymous with domestic terrorism. 

In recent years, federal law enforcement agencies have used terms such as “domestic violent extremist,” “militia violent extremist,” or “domestic terrorist” to refer to conservatives, particularly supporters of President Donald Trump. According to former FBI Special Agent Garret O’Boyle, the FBI has classified every January 6th case as domestic terrorism, even those involving only trespassing or disorderly conduct convictions.

Another FBI agent who spoke on condition of anonymity explained to The Washington Times, “If they have a Gadsden flag and they own guns and they are mean at school board meetings, that’s probably a domestic terrorist.” 

The official definition of a domestic violent extremist according to the FBI and DHS is “an individual based and operating primarily within the U.S. or its territories without direction or inspiration from a foreign terrorist group or foreign power, who seeks to further political or social goals through unlawful acts of force or violence dangerous to human life.”

An example of such violence, says the GAO report, was the “attack on the Capitol” on January 6th, 2021. 

Federal law enforcement agencies have designated five types of violent extremism, including “anti-government or anti-authority sentiment” and “other domestic terrorism threats not otherwise defined,” such as “conspiracy theories.”

Importantly, the report notes that there is no federal statute criminalizing domestic violent extremism. Instead, law enforcement agencies resort to lawfare to prosecute “extremists,” bringing hate speech charges, for example.

The FBI and I&A work closely with social media companies and gaming platforms such as Roblox to censor such “extremism.” Each year the FBI holds a meeting with “private sector partners” such as social media platforms to brief them on the “threat landscape,” a briefing that describes the types of content that should be banned. Last year the FBI worked to increase engagement with gaming companies at this annual meeting.

Via local field offices, FBI operatives keep tabs on these companies throughout the year. In this way the agency can “learn how companies operate, the type of behavior and content companies see on their platforms, and the extent to which companies report information as tips,” according to the report.

When determining which content is “extremist” and constitutes a “threat,” the FBI and DHS appear to rely heavily on reports from universities and organizations such as the Anti-Defamation League (ADL) and the Southern Poverty Law Center. Gaming and social media companies then employ sophisticated machine-learning algorithms to police users’ conversations and posts for such content.

The role played by organizations like the ADL is a crucial one. Their “research” provides the basis for suppressing content that is neither violent nor in violation of the platform’s content policies, but is still considered “extremist.” For example, ancient Greek, Roman, and Norse imagery is reportedly used by White supremacists “to create an aesthetic that supports their narratives” and should be flagged.

Gaming giant Activision employs a censorship tool called ToxMod, which was programmed to suppress content considered extremist by the ADL even if it is non-violent.

“Using research from groups like ADL, studies like the one conducted by NYU, current thought leadership, and conversations with folks in the gaming industry, we’ve developed the category to identify signals that have a high correlation with extremist movements, even if the language itself isn’t violent,” explains ToxMod creator Modulate on its website. 

For its report, the GAO consulted with several ADL operatives and academics as well as a researcher from the Southern Poverty Law Center. Some of these experts recommended to the GAO that the government should “assist states in developing educational programs to teach parents how their children could be radicalized online, framing the issue as one of child safety.”

Yudi Sherman

