Reaction practices

Table of contents

  1. For public institutions
    1. Swedish Civil Contingencies Agency’s Communicators handbook
  2. Criterion: False or misleading
    1. Denial
    2. Chatbots
  3. Criterion: Intent to harm
    1. Content takedown
  4. General
    1. Elves

For public institutions

Swedish Civil Contingencies Agency’s Communicators handbook

The handbook for communicators is a 40-page guide focused on helping communicators identify and respond to information influence campaigns. It also contains thorough descriptions of how information spreads online and of inauthentic behaviours.

Although produced by a Swedish agency, it is not focused on the Swedish context. Most of the limitations to reaction it describes, however, are specific to public actors.

Criterion: False or misleading

Denial

Quickly denying one’s involvement or responsibility in an event or domain can help an actor counter the effects of disinformation. Given that, in interference cases, disinformation campaigns often aim at weakening or destabilizing powerful actors and/or state institutions, a direct denial can be an appropriate reaction.

For instance, in March 2019, several Algerian media outlets spread the false claim that France was suspending visa issuance for Algerian nationals. The French Consulate in Algeria publicly denied it the same day.

It is difficult to assess the effectiveness of such denials, but direct reactions can help discredit the outlets, organizations and individuals spreading disinformation, as well as facilitate the work of fact-checkers.

Chatbots

In Taiwan, a developer created Aunt Meiyu, a bot that can be added to private chats and that flags fake news and rumours as they are shared within family and friend groups. Over a hundred thousand users have set up this bot, with a block rate of around 3%.
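The core mechanic of such a bot can be sketched as a lookup of incoming messages against a database of debunked claims. The sketch below is purely illustrative and assumes nothing about Aunt Meiyu's actual implementation: the names, matching logic, and sample rumor entry are all hypothetical.

```python
# Minimal sketch of a rumor-flagging chat bot (hypothetical; not Aunt Meiyu's real code).
# Each incoming message is compared against known debunked claims;
# a match triggers a correction reply posted back into the group chat.

from dataclasses import dataclass


@dataclass
class Debunk:
    claim_keywords: set    # keywords that together identify the rumor
    correction: str        # fact-check text to post in reply


# Hypothetical sample database; a real bot would sync with fact-checking organizations.
DEBUNKS = [
    Debunk(
        claim_keywords={"france", "visa", "suspended"},
        correction="The French Consulate denied this: visa issuance has not been suspended.",
    ),
]


def check_message(text: str):
    """Return a correction if the message matches a known rumor, else None."""
    words = set(text.lower().split())
    for debunk in DEBUNKS:
        if debunk.claim_keywords <= words:  # all identifying keywords present
            return debunk.correction
    return None
```

A production bot would need fuzzier matching (rumors rarely reuse exact wording) and a messaging-platform webhook to receive and reply to group messages, but the flag-on-match loop is the same.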

Criterion: Intent to harm

Content takedown

Taking content down from the web is rarely the best option available.

First of all, takedowns carry a societal risk of chilling effects: they decrease the likelihood that dissent is expressed, which in turn reinforces the legitimacy of disinformation by giving it more room to build a compelling narrative.

Secondly, the Streisand effect can strongly reinforce both the visibility and the perceived validity of disinformation: taking content down offers its sources a ready-made victimization strategy.

Finally, from a pragmatic point of view, it is complex and cumbersome to obtain takedowns from operators located in jurisdictions with a different understanding of what constitutes acceptable and legitimate speech.

However, there are cases where disinformation goes beyond being misleading and qualifies as unacceptable speech (hate speech, threats…). Two options then exist. If the speech is restricted by the social network operator’s terms of service, flagging it directly with the operator should be sufficient to have it removed. Alternatively, if that type of speech is illegal in your jurisdiction, you will have to identify the contact point that can act quickly and notify the operator through its dedicated channel.

General

Elves

Elves are civilian volunteers who fight internet trolls and disinformation. They can operate individually as well as form organised social media communities, and they act both proactively and reactively.

The term originated in the Baltic states (Lithuania, Latvia and Estonia), and the movement later spread to eastern and then western Europe. Hunting and fighting trolls, elves usually work in self-organized networks to counter disinformation. Their activities include debunking, spreading fact-checks, and identifying fake accounts, among others.

It is difficult to assess the level of influence exerted by their actions. However, especially in the Baltic states and more particularly Lithuania, elves have gained significant media attention for their digital warfare against pro-Kremlin trolls.

Creative Commons Attribution Share alike license