Facebook sued in France over hate speech

The lawsuit is based on the French consumer code, which protects consumers from deceptive commercial practices

Reporters Without Borders (Reporters sans frontières or RSF) has filed a lawsuit against Facebook in France for allegedly allowing the proliferation of disinformation and hate speech on its platform.

The global media watchdog alleged in the court filing that Facebook had failed to provide a "safe" online environment for users.

"Using expert analyses, personal testimony and statements from former Facebook employees, RSF's lawsuit demonstrates that ... it (Facebook) allows disinformation and hate speech to flourish on its network - hatred in general and hatred against journalists - contrary to the claims made in its terms of service and through its ads," RSF said.

It added that the suit concerned the French and Irish divisions of Facebook and is based on the French consumer code, which protects consumers from deceptive commercial practices. Companies found in violation of the code can be fined up to 10 per cent of their annual turnover.

To support its claims, RSF cited reports detailing threats and hate speech against French journalists, as well as examples of disinformation circulating on the platform.

According to an AFP study, the social media giant lets Covid-19 conspiracy videos go largely unchecked, while non-profit First Draft has labelled Facebook as the "hub" of vaccine conspiracies in France.

RSF further said that Facebook had done little to stop threats and hate speech against Charlie Hebdo, the newspaper L'Union and the TV show Quotidien.

"We expect Facebook to effectively respect the commitments it has made to its consumers, rather than pretending to implement them without this being the case," the press freedom group said.

It added that it was considering filing similar suits in other countries.

In a statement to CNN, Facebook said that it has "zero tolerance for any harmful content on our platforms and we're investing heavily to tackle hate speech and misinformation."

Earlier this week, Facebook's vice president of integrity, Guy Rosen, announced that more than 1.3 billion fake accounts on Facebook were disabled between October and December 2020, and millions of posts containing misinformation surrounding the pandemic and vaccines were removed in recent months.

"Despite all of these efforts, there are some who believe that we have a financial interest in turning a blind eye to misinformation," Rosen said.

"The opposite is true."

On Thursday, Facebook CEO Mark Zuckerberg also appeared before the US Congress in a virtual hearing held by the Energy and Commerce Committee and two Senate subcommittees to discuss the proliferation of disinformation and misinformation on Facebook, YouTube, and Twitter.

The hearing was announced in February, over a month after the riot in which Trump supporters stormed the Capitol Building while lawmakers were attempting to count electoral votes for the US presidential election.

In the opening session on Thursday, chair Mike Doyle asked Zuckerberg, along with the CEOs of Google and Twitter, whether they felt they bore responsibility for the Capitol Building violence and other extremist events in January.

Zuckerberg rejected responsibility for provoking the riots, claiming that hateful content on Facebook made up only a small fraction of what users saw.

"We believe Congress should consider making platforms' intermediary liability protection for certain types of unlawful content conditional on companies' ability to meet best practice to combat the spread of this content," Zuckerberg said.

He also outlined various measures taken by the company in recent months to counter disinformation on its platform.