Reaction case studies

Table of contents

  1. Legislation
    1. The French law against information manipulation
      1. Social media
      2. Foreign controlled publishers
      3. Media education
      4. Jurisprudence
    2. EU Code of Practice on Disinformation
      1. Signatories
      2. Scope
      3. Goal
      4. Defining disinformation
      5. Commitments
  2. Correct the record


The French law against information manipulation

In France, a law against information manipulation and the intentional spread of disinformation was officially adopted on November 20, 2018, nine months after its proposal; the implementing decree was published on April 12, 2019.

It focuses on the massive and rapid spread of disinformation through social media in electoral contexts, and through foreign state-owned media outlets at all times.

Social media

The law compels social media operators to be more transparent by disclosing the name of the sponsor and the cost of sponsored political and issue-based content. The largest platforms must have a legal representative in France (article 13) and publish the share of organic versus algorithmically recommended views for all political and issue-based content (article 14).

For the three months before an election, a judge can order the takedown of content that is “inexact or misleading” and that has an “ability to alter the sincerity of the vote”. To be targeted, the content also has to be spread “deliberately, artificially or automatically, and massively”. Both the ruling and any appeal must take at most 48 hours (article 1, creating article L163-2 of the Electoral code).
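These conditions are cumulative: a judge can only order a takedown if every criterion holds. As a rough logical summary (a sketch, not legal advice; the field names below are paraphrases of the statute, not official terms), the test can be expressed as a conjunction:

```python
# Hypothetical summary of the cumulative conditions of article L163-2
# of the French Electoral code. Field names are illustrative paraphrases.

def takedown_possible(content: dict, within_three_months_of_election: bool) -> bool:
    """Return True only when ALL criteria of article L163-2 are met."""
    return (
        within_three_months_of_election
        and content["inexact_or_misleading"]
        and content["can_alter_sincerity_of_vote"]
        and content["spread_deliberately_artificially_or_automatically"]
        and content["spread_massively"]
    )

# A post meeting every criterion qualifies...
qualifying = {
    "inexact_or_misleading": True,
    "can_alter_sincerity_of_vote": True,
    "spread_deliberately_artificially_or_automatically": True,
    "spread_massively": True,
}
print(takedown_possible(qualifying, within_three_months_of_election=True))   # True
# ...but the same post outside the electoral window does not.
print(takedown_possible(qualifying, within_three_months_of_election=False))  # False
```

Failing any single criterion, including the timing one, puts the content out of the law's scope.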

Beyond election periods, platforms are expected to cooperate, as well as to build and implement transparent measures to fight disinformation. They must report yearly on their actions to the French broadcasting regulator, the Superior Audiovisual Council (CSA) (article 11). This echoes the EU Code of Practice.

Foreign controlled publishers

The CSA has been given authority to prevent, suspend or interrupt the broadcasting of foreign state-owned (or state-influenced) broadcasting services when there is “a grave risk” to human dignity, freedom, property, pluralism, public order, defence needs or fundamental national interests (article 5).

Media education

The law expands the duties of the national education system to include training pupils in media literacy and in recognising information manipulation (articles 16 to 19).


On May 10, 2019, two French elected representatives asked for the takedown of a tweet by the Minister of the Interior claiming that a hospital in Paris had been “attacked” during a demonstration on May 1. It was proven shortly after the tweet was sent that the people who had entered the hospital were only seeking refuge.

Since the content had been published within three months of an election (the EU parliamentary elections), they brought the case to court on May 10 under the 2018 law against information manipulation and obtained a hearing on May 16. The ruling, issued on May 17, rejected the case.

According to the ruling, the case was out of the scope of the law, as not all criteria were met:

  • The content was not deemed “false or misleading” as it “relied on actual events” (an intrusion in the hospital), as demonstrated by press articles, and deemed only an “exaggeration”.
  • It was not deemed “able to alter the sincerity of the next election”, as it was “immediately challenged”, “allowing citizens to make up their minds without manifest risks of manipulation”.
  • It was not demonstrated by the plaintiffs that the spread was “deliberate, massive, artificial or automated”.

As a side note, Twitter France was allowed not to be summoned in the case, as it claimed that it only monetized the information network of Twitter International, which was therefore the party actually responsible for data use and processing. Twitter International was not summoned either, but made a “voluntary intervention” in the case, adding details to the debate.

EU Code of Practice on Disinformation

The Code of Practice was initiated by the European Commission. Signed in September 2018, it entered into force one month later. The Code was developed to achieve objectives already laid out in April of the same year regarding the spread of disinformation online, especially on social media platforms, ahead of the European elections of May 2019.

Commitment to and implementation of the Code are voluntary, based on self-regulatory standards.


Signatories

Signatories include platforms such as Facebook, Twitter and YouTube, as well as Google and Mozilla. Alongside representatives of online platforms and prominent social networks, the signatories also include advertising industry actors.


Scope

The application of the Code of Practice is limited, for each signatory, to services provided within the European Economic Area (EEA).


Goal

Overall, signatories must contribute to solutions to the challenges raised by disinformation. As the text itself explains, “the purpose of this Code is to identify the actions that Signatories could put in place in order to address the challenges related to “Disinformation””.

These actions include efforts towards greater transparency, safeguards and scrutiny, reduced visibility of false information, and improved findability of trustworthy content, among others.

Defining disinformation

The Code offers a fairly comprehensive definition of disinformation, on which all signatories agree. Excluding “misleading advertising, reporting errors, satire and parody, or clearly identified partisan news and commentary”, disinformation is defined as “verifiably false or misleading information” that is “created, presented and disseminated for economic gain or to intentionally deceive the public”; that “may cause public harm”; and that threatens “democratic political and policymaking processes as well as public goods such as the protection of EU citizens’ health, the environment or security”.


Commitments

The Code lists a variety of detailed measures to which signatories commit, according to their technical capabilities, the services they provide, their liability regime and “the role they play in the creation and dissemination of the content at stake”, among other criteria.

These measures include the scrutiny of advertising placements (for instance, the deployment of policies and processes disrupting “advertising and monetization incentives for relevant behaviours”); the clear and public disclosure of political and issue-based advertising (in order to distinguish it from editorial content such as news); as well as the measuring and monitoring of the effectiveness of the Code.

Moreover, signatories commit to making efforts to empower consumers and the research community. Most also commit to putting in place and enforcing “clear policies regarding identity and the misuse of automated bots on their services”.

To help signatories do so, best practices are detailed in the annex of the Code. However, considering the diverse nature of the signatories’ operations, purposes, technologies and audiences, the Code is open to various other approaches “accomplishing the spirit of [its] provisions”.

Correct the record

On March 12, 2019, the international cyberactivism NGO Avaaz (“voice” in Sanskrit) published a report called “Yellow Vests flooded by Fake News - Over 100M views of disinformation on Facebook”, in which it called on Facebook to “Correct the Record” ahead of the EU elections. Alongside a study of disinformation - especially Russian campaigns - focused on the Yellow Vests movement in France, Avaaz details its proposal for an innovative solution.

Based on cooperation between fact-checkers and platforms (especially Facebook), the initiative follows a five-step process:

  1. Content viewed by a significant number of people and deemed false or misleading after verification by independent fact-checkers would “activate” an obligation for platforms to correct the record.
  2. Platforms would have to provide a misinformation reporting mechanism that is easy for users to access, as well as give fact-checkers access to content that has been viewed by a significant number of people.
  3. Reported content should then be fact-checked within 24 hours by “independent, third-party, verified factcheckers” working with the platforms.
  4. Platforms are also asked to display their “most visible notification standard” to notify all users exposed to verified disinformation.
  5. Lastly, “each user exposed to disinformation should receive a correction that is of at least equal prominence to the original content and that follows best practices”, specifically adapted to the user’s profile.

Ultimately, all platform users exposed to disinformation would receive independent third-party corrections. One key point is avoiding the repetition of the disinformation itself.
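As a rough illustration, the correction obligation described above could be modeled as a simple workflow. This is a sketch under stated assumptions: the view threshold, class and function names are all hypothetical, not part of Avaaz's proposal.

```python
from dataclasses import dataclass, field

# Hypothetical model of the "Correct the Record" obligation.
# The threshold below is an illustrative assumption for "a significant
# number of people"; Avaaz's report does not fix a specific number here.
SIGNIFICANT_VIEWS = 10_000

@dataclass
class Content:
    text: str
    views: int
    verified_false: bool = False          # set by independent fact-checkers
    notified_users: list = field(default_factory=list)
    corrections_sent: list = field(default_factory=list)

def correct_the_record(content: Content, exposed_users: list) -> bool:
    """Apply the correction obligation when content qualifies."""
    # Widely viewed content verified as false "activates" the obligation.
    if content.views < SIGNIFICANT_VIEWS or not content.verified_false:
        return False
    # Notify every exposed user (the "most visible notification standard").
    content.notified_users = list(exposed_users)
    # Each exposed user receives a correction of at least equal prominence.
    content.corrections_sent = [f"correction for {user}" for user in exposed_users]
    return True

post = Content(text="viral false claim", views=120_000, verified_false=True)
print(correct_the_record(post, ["alice", "bob"]))  # True
print(post.corrections_sent)                       # ['correction for alice', 'correction for bob']
```

The key design point the sketch captures is that the correction reaches every exposed user individually, rather than being posted once and left to circulate on its own.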

Through this initiative, Avaaz’s goal is to “restore the public’s trust” and “ensure the integrity” of the upcoming European elections.

More information on the usefulness and effectiveness of corrections in countering the effects of disinformation can be found here (p.14/28), here and here.

Creative Commons Attribution-ShareAlike license