User empowerment is the best tool to curb online harassment
Online harassment is both a privacy and a security concern. We all know the story of how someone (typically a woman, studies say) states their opinion online and is then harassed to the point of leaving the service (or worse). Using the infamous “with an opinion” hook, we can frame a user story that affects more than 50% of the population:
User story: I am a marginalized person with an opinion. I want to intercept online harassment, so that I can communicate safely with friends and strangers.
The truth is that a motivated mob can target anyone, marginalized or not. We would all benefit from effective anti-harassment tools.
Don’t rely on the operator
Many current and proposed solutions to stop or curb harassment rely on one or more of these methods:
- Human content moderation. Typically volunteer or low-paid, and subject to burnout. A moderation team simply does not scale, and cannot moderate private messages (we define “private” as “end-to-end encrypted”).
- Server-side tracking. Opaque, error-prone “algorithms” regularly make mistakes, with little or no transparency or recourse. And once more, they cannot apply to private messages.
- Shoot-first takedown laws that skip the deliberative process and are frequently abused.
- Corporate censorship, or any of the above distorted by the bottom line.
It is tempting to rely on a server-side solution, whether that means the machine itself or humans working on your behalf. This can work at tiny scales if you have a trusted friend with both technical and legal know-how, but in all other cases the issues are compounded. To mash up two misunderstood quotes:
You solved a harassment problem by ceding control to the service? Now you have two problems.
Empower the user
We suggest that user empowerment via client-side features is a more robust and safer approach. Potential design patterns include:
1. Client-side heuristics
Server-side solutions necessarily put power in the hands of a developer or sysadmin. By contrast, client-side heuristics put power in the hands of the user, including the power to turn them off. Privacy Badger is a great example of this in practice:
- Fresh installations use rules generated by offline training.
- Additional rules generated by behavior-based heuristics as you browse.
- Additional customization for experienced users.
- No ads, no calling home, no tracking.
- The option to turn it off entirely, for example if you are researching trackers.
Moving forward, we aim to enhance all Librem One clients with Badger-like functionality. We believe that the majority of cases won’t require machine learning, and can be handled with simple heuristics.
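To make “simple heuristics” concrete, here is a minimal sketch of one such client-side rule: flag senders who send a burst of messages in a short window. The class name, thresholds, and method signatures are invented for illustration; they are not Librem One or Privacy Badger APIs. Crucially, everything runs on the client, and the user can tune or disable it.

```python
# Hypothetical client-side heuristic: flag a sender who exceeds a
# message-rate threshold within a sliding time window. All names and
# defaults here are assumptions for illustration only.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class BurstHeuristic:
    """Flags senders who send too many messages too quickly."""
    window_seconds: int = 60   # user-tunable
    max_messages: int = 5      # user-tunable
    enabled: bool = True       # the user can always turn it off
    _seen: dict = field(default_factory=lambda: defaultdict(list))

    def is_suspicious(self, sender: str, timestamp: float) -> bool:
        if not self.enabled:
            return False
        # Keep only timestamps still inside the window, then record this one.
        recent = [t for t in self._seen[sender]
                  if timestamp - t < self.window_seconds]
        recent.append(timestamp)
        self._seen[sender] = recent
        return len(recent) > self.max_messages
```

A real client would combine several such rules (rate, account age, shared-contact overlap) and surface the result as a soft warning rather than a hard block, leaving the final decision with the user.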
2. Safety mode
We can classify online correspondents into three groups:
- Trusted contacts. People we talk to regularly, and trust.
- Strangers. People we don’t know well, or don’t know at all.
- Bad actors. People we don’t want to interact with, possibly based on the advice of a trusted contact.
Typically, we want to communicate with strangers online, so this should be possible by default. But if we are being actively harassed, we can assume that further messages from strangers are unsafe, and switch our account to “safety mode”, rejecting messages, invites and other interactions from strangers. We can rely on our trusted contacts for help and support, including passing on well-wishes from strangers.
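The safety-mode decision described above reduces to a small, auditable rule. The sketch below is an assumption-laden illustration (the enum and function names are ours, not a Librem One API), but it shows why this pattern is attractive: the entire policy fits in a few lines the user can inspect and control.

```python
# Illustrative sketch of the safety-mode decision rule. The names
# Relationship and accept_message are invented for this example.
from enum import Enum

class Relationship(Enum):
    TRUSTED = "trusted"      # people we talk to regularly, and trust
    STRANGER = "stranger"    # people we don't know well, or at all
    BAD_ACTOR = "bad_actor"  # people we don't want to interact with

def accept_message(sender: Relationship, safety_mode: bool) -> bool:
    """Trusted contacts always get through; bad actors never do.
    Strangers are accepted only when safety mode is off."""
    if sender is Relationship.BAD_ACTOR:
        return False
    if sender is Relationship.TRUSTED:
        return True
    return not safety_mode
```

An at-risk user starting their account in safety mode simply means `safety_mode=True` is the initial default rather than something toggled under duress.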
At-risk individuals might choose to start their account in safety mode.
Trusted caretakers might maintain lists of bad actors, but trusting a caretaker should require very careful consideration: What is their governance model? What is their appeals process? Do they leak information about list recipients?
3. Crowd-sourced tagging for public content
In the specific case of public posts, we believe that public crowd-sourced tagging (aka, folksonomy) is a sustainable and fair replacement for human moderation, caretaker-lists and takedowns.
This approach takes moderation power out of the hands of a few sysadmins and corporate moderation teams, and grants it to all users equally. Users are free to decide which user-moderators they trust, and filter based on their tags, or skip moderation entirely.
For example, a tagged thread might look like this:

- “I pity the fool who can’t butter their #toast! #onlydirectionisup” — tagged: #hatespeech, #butterpolitics
- Reply: “Shut up! My grandparents fought to butter side #down!” — tagged: #thoughtleader, #butterpolitics
- Reply: “Well actually, you’re ignoring the #margarine argument. You’re such #lipidariantoastbros” — tagged: #butterpolitics
- Reply: “Why can’t we all just get along?” — tagged: #butterdowner, #butterupper, #lipidariantoastbros, #butterpolitics, #thoughtleader
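Filtering on a folksonomy like the thread above is mechanically simple. The sketch below, with invented data shapes and function names, hides a post only when a moderator the user personally trusts has applied a tag the user chooses to filter; trusting no moderators skips moderation entirely.

```python
# Hedged sketch of crowd-sourced tag filtering. The data shapes
# (moderator -> tags mapping) and the function name are assumptions
# made for this illustration.
def visible(post_tags: dict, trusted_moderators: set, filtered_tags: set) -> bool:
    """post_tags maps a moderator's handle to the set of tags they
    applied to this post. The post is hidden only if a moderator the
    user trusts applied a tag the user filters."""
    for moderator, tags in post_tags.items():
        if moderator in trusted_moderators and tags & filtered_tags:
            return False
    return True
```

Because the trust list and the filter list both live with the user, no sysadmin ever decides on the user’s behalf what they may read.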
Where to, from here?
These are only a few of the high-level patterns we are considering as enhancements to all Librem One clients. They build on the philosophy we’ve already outlined on our blog, under the “user empowerment” tag, and your Librem One subscription supports our team as we turn these patterns into a reality.
We look forward to reading more proposals from our friends and colleagues in the free software and anti-harassment communities. We are particularly interested in design patterns that honor our “no tracking” policy, and reliable (peer-reviewed) statistics that help prioritize use-cases. We are already looking at:
- Harassment research by Hollaback!
- The work of the Sassafras Tech Collective, including “Anti-Oppressive Design”
- OcapPub: Towards networks of consent, an upcoming paper by Christopher Lemmer Weber, co-author of the ActivityPub standard
- And “Protecting Children Online: Cyberbullying Policies of Social Media Companies”, by
In the meantime, and whether you are a Librem One user or not, please refer to our stay safe guide. It’s quick and easy to read, just like our policy, and we keep it up-to-date with links to high-quality, world-audience resources.
Thanks for stopping by, stay safe, and stay tuned for more user empowerment news.