Online harassment has been on the rise against historically marginalized groups as they engage in online spaces. While the internet was once expected to be an egalitarian space for all people to share their opinions and ideas, we have witnessed an increase in hate speech, threats of violence, targeted hacking, and other methods used as a form of censorship to silence underrepresented voices.
The actions of harassers are both a) a reprisal tactic against those who speak out online and b) a means to control online discourse by discouraging others from continuing to participate equally. Of particular interest to OTF are the evolving methods deployed by repressive governments to silence independent voices and marginalize vulnerable populations.
Sockpuppetry, or manipulating online discussions with the intent to distort, has existed in online spaces for years. OTF previously supported efforts to detect these actions in online forums. Now, a variety of advanced techniques regularly employed by repressive governments allow them to manipulate and control online discourse rather than simply block the underlying medium. For instance, in Vietnam unknown actors successfully silenced those expressing dissent against the government by coordinating use of the “flag as inappropriate” function on Facebook, which, with more than 25 million users, is the dominant social platform in the country. This episode highlights how these types of tactics can not only deter freedom of expression but also silence it entirely.
What applications to OTF that address online harassment should include or omit:
In general, technical approaches that mitigate the effectiveness of such censorship tactics or the ease with which they are carried out, as well as digital security support for those who have experienced such censorship in repressive environments, fit within our remit.
Directly developing technology that addresses this challenge is within remit;
Conducting research or analysis to better understand the nature or scope of the challenge, in particular research producing outputs that could advance the effectiveness of mitigating technologies, is within remit;
Trainings or digital security efforts with outputs that could advance the effectiveness of mitigating technologies are within remit;
More holistic digital security or organizational capacity-building support, and/or efforts aimed at those in environments with non-repressive governments, are not within remit.
Efforts where advocacy is the primary objective are not within remit – while recognizing the fine line between pure advocacy and efforts that increase available information and awareness as a means to improve technology.