Last November, Pablo Breuer was working from home in Baltimore when Elon Musk shared a report on X accusing him of designing a sweeping censorship regime to silence rightwing voices.

The article in question claimed Breuer was one of the instrumental figures in the birth of the “Censorship Industrial Complex” — a concerted effort across more than 100 government agencies and non-governmental organisations to stifle free speech in the US and the UK.

It was co-authored by Matt Taibbi and Michael Shellenberger, known for their part in the “Twitter Files”, in which X owner Musk invited journalists to trawl through internal documents about the platform’s past content moderation decisions.

Breuer, a cyber expert who became an executive director on Morgan Stanley’s US security team after 22 years in the US Navy, received a wave of online abuse following the article. He fiercely denies its characterisation of what he says are his volunteer efforts to fight online disinformation. 

“When you spend your career in the military defending the constitution, to have somebody accuse you of wilfully violating the constitutional rights of your fellow citizens . . . ghastly isn’t even the word,” he says.

But he is not alone. As the 2024 US presidential election approaches, dozens of academics, non-profit workers, volunteers and researchers who participate in initiatives tracking and countering online misinformation have been aggressively targeted by a loose coalition of free speech activists, Republican lawmakers and allies of Donald Trump, sometimes with their lives upturned as a result. 

Free speech campaigners say they are shining a much-needed light on what they see as the weaponisation of disinformation research. They accuse the US government, social media platforms and academics of over-reach and collusion that they say has trampled over the First Amendment rights of Americans to free speech.

The journalists behind the so-called Twitter Files accused Pablo Breuer of being part of a concerted effort to suppress free speech in the US and UK, prompting a wave of abuse directed at the cyber expert © Andrew Mangum/FT

Their tactics include organising congressional hearings and instigating legal battles, including a major Supreme Court case that could shape the future of moderation and free speech in America.

Key figures include Jim Jordan, a Republican congressman who chairs the House judiciary committee, former senior Trump adviser Stephen Miller, former state department official Michael Benz — and the former president himself. He recently asked supporters in a campaign email whether “it’s time we shatter the leftwing censorship regime?”

On the other side, critics insist that fact-checking does not equate to partisan censorship, and dismiss the notion that there is a shady counter-disinformation cabal as a baseless conspiracy. They add that the attacks are driven partly by ideology, partly by the desire to rile up a political base and for some, by profit. 

“The whole thing is so surreal,” says one academic, speaking on condition of anonymity for fear of retribution. “They turn you into a character, they work to discredit you, and then they sell the story serialised [on Substack] for $9.99 a month.” 

Report co-author Taibbi, responding via email, says the anti-disinformation community “is already easily a multibillion-dollar industry, and you’re writing about ‘profit’ incentives for . . . me?”

The highly polarised debate is having a chilling effect on the field of disinformation research, according to more than a dozen academics and experts interviewed by the Financial Times.

Institutions such as Stanford and the University of Washington have been burdened with legal or security costs when attempting to protect their staff from harassment, some people say. Donors that previously backed disinformation non-profits have grown nervous, according to people familiar with their thinking.

Breuer says that following the allegations against him, he has increased security at his house and discussed an evacuation plan with his wife.

“When I was in the military I knew what I’d signed up for,” he says. “But that was just me, right? The kids weren’t involved, my spouse wasn’t there. I didn’t have to worry about their safety. And I had very clear rules of engagement.” 


The notion of disinformation wars exploded into the mainstream following Trump’s 2016 win, after US investigators found evidence that a Russian troll farm had sought to interfere in the vote. 

At the time, its efforts went largely unnoticed by Washington and the social media platforms, which later introduced rules prohibiting co-ordinated covert influence operations and misinformation about voting, and set up internal teams to hunt out covert propaganda perpetrators. 

It also led to the emergence of disinformation study as an academic subject and spawned dozens of start-up companies and non-profit organisations promising counter-disinformation or fact-checking services.

Breuer was among them; in 2019 he co-launched a “kill chain” for disinformation campaigns, referring to the process by which the military would break down the phases of an attack to prevent it advancing. He was working for the Department of Defense at the time but says the work was “all done in a volunteer capacity”. The blueprint has been accepted by the US government and the EU as a standard for sharing threat information, according to official documents, and is used by the World Health Organization to address disinformation.

Jake Angeli, a QAnon conspiracy theorist, protests outside an election counting station in Phoenix in 2020. Donald Trump has already suggested this year’s election could be rigged © Olivier Touron/AFP/Getty Images

Across academia and non-profits, initial efforts were focused on foreign disinformation — broadly defined as information that is at least partially false in content or context, shared with explicit intent to deceive and typically state-sponsored, often by Russia, China or Iran. But the focus soon turned to homegrown misinformation — false information that may be shared unwittingly.

“The conversation shifted from what foreign trolls were saying to what real people were saying — and of course that was going to be contentious,” says Darren Linvill, a professor at Clemson University focused on disinformation. 

During Trump’s presidency, his supporters often claimed that social media groups and academics were deliberately silencing conservative voices. Those claims intensified after he was kicked off social networks in the wake of the riot at the Capitol and his insistence that the 2020 election was rigged.

The Covid-19 pandemic was also a significant flashpoint. Initially, the platforms raced to get ahead of fears that the spread of certain content could result in more deaths. But in some cases, they were forced to backtrack; in mid-2021 Facebook reversed its ban on claims that Covid-19 was man-made or manufactured. 

Gordon Pennycook, psychology professor at Cornell University, says there is plenty of research suggesting that rightwing social media users tend to spread misinformation more than their counterparts on the left.

But he and others note that most university academics lean liberal. “There is a genuine issue where academics tend to be on one side of the political aisle and there tends to be more misinformation on [the other] side,” he says. “It gets sucked into the culture wars.”


In April 2021, Sir Nick Clegg was in trouble. Meta’s head of global affairs and former UK deputy prime minister had just come off an hour-long call with a furious Andy Slavitt, then a senior adviser to Joe Biden.

“[He] was outraged — not too strong of a word to describe his reaction — that we did not remove this post,” Clegg wrote in an email to his team after the call, referencing a request from the official to take down a Leonardo DiCaprio meme with an anti-vaccine message. 

Clegg had argued that doing so would represent “a significant incursion into traditional boundaries of free expression” but nevertheless told staffers to look at more data and consider next steps.

This interaction is one of many unearthed by a congressional investigation, led by Republican congressman Jordan, which accuses the Biden administration of coercing Big Tech to muzzle Americans.

Anthony Fauci, former chief medical adviser to the president, is questioned by Republican congressman Jim Jordan in Washington this month © Somodevilla/Getty Images

A spokesperson for the judiciary committee said the documents obtained were “concrete evidence that social media giants caved and changed their policies to stifle Americans’ speech because they needed to maintain a good relationship with the Biden White House to garner favourable policy decisions”. Meta declined to comment. Slavitt and other White House officials did not respond to requests for comment.

Making similar arguments, one of the free speech lobby’s most consequential initiatives comes to a head in the coming weeks, when the US Supreme Court is expected to rule in a case brought by a Republican attorney-general who has since become a senator. It revolves around whether federal officials colluded to coerce social media platforms to suppress free speech on topics including coronavirus and election interference. 

“Whatever the cost of free speech is in terms of allowing people to spread false information, the cost of empowering the government to decide what is and isn’t true and what can and can’t be said is much higher,” says Aaron Terr, director of public advocacy for the Foundation for Individual Rights and Expression, a non-profit that aims to protect free speech.

Several academics acknowledge that there may have been instances where government workers overstepped the mark in their takedown requests, but say such episodes did not constitute collusion or the work of a deep-state cabal.

Others say the Supreme Court case is based on cherry-picked statements taken out of context. The Tech Policy Press, a non-profit group, said bits of communications between various entities “appear to be stitched together — nay, manufactured — more to support a culture war conspiracy theory than to create a credible factual record”.

As a result of the case, Meta told reporters in December that the government had frozen its information sharing with the platform. Last month it said some “limited” information sharing had resumed but that it was unclear how this would continue.

Meta has also rolled back a ban on political adverts that claim the 2020 election was stolen and shrunk the team that tackles disinformation.

Chris Krebs, former director of the US Cybersecurity and Infrastructure Security Agency, describes the shifting landscape as “concerning” given that the “Russian influence operations” that attempted to meddle in the 2020 presidential election continue to function.

Bret Schafer, a propaganda expert at the advocacy group Alliance for Securing Democracy, points out that in 2016 “nobody was communicating particularly well, so those silos led to us missing things”.

“And now, for different reasons, those silos have been built back up again.”


Researchers and academics have also been swept up in the Supreme Court case and broad congressional investigation into whether Big Tech colluded with the Biden administration.

They are largely being targeted on the premise that any links to or financial support from government implies they are an extension of officialdom.

“The worst-case scenario would be if they are, in fact, not independent researchers, but really delegates and the government has deputised them to do its dirty work,” says Nadine Strossen, a legal scholar and civil liberties activist.

As part of his inquiry, Jordan has held multiple hearings and fired off dozens of information requests and subpoenas to researchers. In particular, he has taken aim at the Election Integrity Partnership, a university-led project to combat viral election misinformation, which is made up of Stanford University, the University of Washington, the Atlantic Council think-tank and social media intelligence firm Graphika.

The judiciary committee says those involved with EIP “played a unique role in the censorship industrial complex given their extensive, direct contacts with federal government agencies”.

Lawsuits and further public records requests have followed. The founder of website Gateway Pundit, Jim Hoft, is suing researchers from the EIP collaboration, arguing it was a “mass-surveillance and mass-censorship program”. Hoft is represented by lawyers at America First Legal, a non-profit headed by former Trump official Miller.

The academic and non-profit groups, as well as individual researchers, have also attracted allegations and moral outrage from a network of rightwing political influencers with millions of online followers.

In response, some researchers have publicly stated that they are not puppets of the Biden administration or part of a liberal plot to steal elections. Stanford, for example, said in March that “universities do not become government actors by engaging with the government about their research”.

Footage of Stephen Miller, head of the non-profit America First Legal and a former senior Trump adviser, is played during a committee hearing into the January 6 2021 attack on the US Capitol © Al Drago/Bloomberg

Others have remained silent in public, but privately say they have been burdened by onerous information requests while grant funding is becoming harder to secure. 

“I’m just constantly shocked at how weird, how upside down my life is now, where I’m in a position where I would even be on letters sent by senators,” says Clemson University professor Linvill. He has been asked to share all his emails with Jordan’s office and has received multiple freedom of information requests from the Twitter Files journalists. “I’m on a first-name basis with our general counsel office now,” he quips.

Schafer adds that “there’s a risk of even the most innocuous of emails becoming the central character of a conspiracy theory”.

Breuer says being thrust into the public eye put him at odds with the leadership at Morgan Stanley — the bank that advised Musk on his acquisition of Twitter and led the syndicate that helped finance it. He says he was recently given a negative evaluation and told by a managing director that the incident had put the firm “in a bad light”.

Morgan Stanley said it could not comment on Breuer’s job performance but that his outside activities “were at no point taken into consideration when assessing his job performance”.


Many fear matters will worsen as the country heads to the polls in November for an election that Trump has already warned could be “rigged”.

“A well-functioning democracy would be able to answer questions about its election with concrete facts,” says Alex Abdo, litigation director of the Knight First Amendment Institute at Columbia University.

After Elon Musk took over Twitter, now called X, he invited journalists to examine internal documents about the platform’s past content moderation decisions © Gonzalo Fuentes/Reuters

“But I’m worried that we won’t have those answers because there is a concerted effort by Congress and others to make sure we don’t have answers to the facts and the research doesn’t get done.”

A second Trump presidency could herald a further escalation. In 2022, the former president said that if re-elected he would ban the US government from labelling domestic speech as mis- or disinformation and fire any federal bureaucrat he believes has already engaged in alleged domestic censorship “directly or indirectly”.


He also said he would curb funds to any universities found to have “engaged in censorship activities or election interferences in the past, such as flagging social media content for removal or blacklisting” for at least five years. 

Some are urging solidarity and defiance in the face of such threats. In a September report, the non-profit Center for Democracy and Technology called for groups to “create shared resources and practices for researchers under attack”. 

Kate Starbird, associate professor at the University of Washington, says her research centre, the Center for an Informed Public, would continue to flag “harmful election rumours” in 2024. 

“The last year has been challenging, but it’s a blip in the larger context of our work.”

Copyright The Financial Times Limited 2024. All rights reserved.
