Holistic vs Narrow Approaches to Combating Harassment


#1

I would avoid framing this sort of team as a group working on explicitly anti-harassment features.
Not to say I’m disagreeing with the features you specifically propose, but I think that countering harassment is one part of our community strategy and needs to be discussed alongside the cultural support and institutions necessary for those features to be effective. If we focus too much on solving these challenges in software features without treating community strategies as a natural part of the discussion, I think we’re going to fail to identify synergistic solutions that remain effective even in the face of the frailty of control inherent in decentralised systems.


#2

i hesitate to believe in any sort of community, frankly, and the utterly vague term “community strategies” does nothing for me. the fediverse is fucking massive; any effort toward educating, instructing, or otherwise guiding this massive group of people who cannot practically be organized seems rather fruitless. i almost always focus on the software due to this bias against the very concept of community.

this all said, I’m listening if you wish to better explain, in practical terms or as some sort of checklist, how to make any sort of “community” effort on this, even with all my doubts


#3

Are you familiar with the term playbook?
What I’m talking about is essentially that we need to do more than just put out anti-harassment features, and that those features need to be considered in the context of a wider community response strategy. If we give users and admins a bunch of features, they will find a bunch of ways to use them, but only through trial and error. Instead I think we should develop them with a response strategy in mind, or else we won’t have real recourse when faced with either a bad admin acting or a coordinated harassment campaign.

What I’m envisioning is a list of best practices, general strategies, and techniques to avoid escalation, to effectively counter technological attacks, and to coordinate with support resources to minimize trauma in situations where other bulwarks fail. Mastodon.social will need this to effectively scale its moderation effort anyway, but I think that envisioning this as a decentralized, shared resource that we develop in tandem with software features will increase the quality of responses across the federation.


#4

i don’t mean to come off as hostile, but i feel this is actually something i had thought most people considered to be the opposite of what the fediverse wants?

i do believe that Standard Operating Procedures (as the navy calls them, though that term is much more specific) are a positive thing for admins and users to have access to when handling these situations, although i would not say they must be a book that is always followed…

this has nothing to do with a “community strategy”.

i worry i may come off as pedantic or as attacking your use of the term “community”, but i still find this idea entirely unrelated to any sort of community.

a “suggested practice” or “suggested admin guidebook” is a wonderful tool but it is nothing to do with a “community”, as i understand it.

you cannot ensure that people obey this “community” strategy, and thus there isn’t really any community involved. community implies a coming together of people, “a group linked by a common policy”. this policy cannot be enforced, will not be enforced, and is thus a community toolset at the very best. and even then it would not really be a community, as i understand it, as it assumes a group of instances all using the same toolset and communicating.

this is not to vilify or disagree with any work towards a “community”, but i don’t hesitate to point out how fragile the idea of “community” is, and i advise being very careful when using such words or dancing around this idea, as i find it counterproductive more often than not.

EDIT: i realize this might have been a largely negative post, so an addition on how to positively manage a community follows.

first, communication, as is implied in the name, and thus a chat of some sort or another to discuss upcoming changes and the like.
second, a written agreement, or otherwise some sort of proof of agreement to the rules and regulations such a community must form.
third, a system for anonymously reporting to the community at large any admin not obeying those rules and regulations.

that’s about all i got, i think. i don’t mean to say communities are a bad idea, or a waste of time, i have simply never seen one work out well, and thus am largely pessimistic.


#5

Mastodon is an open source project.

That project is going to have organization, implied or otherwise. As people are so fond of pointing out when someone develops a script they don’t like, tools do not exist absent their social context. :stuck_out_tongue: Even if we choose not to offer advice on how to deal with the tools we have provided, we are going to encode assumptions into how we implement features. Good documentation is never bad; we just don’t usually consider that social networks are tools that also require a user manual, one that incorporates their social aspects. If you want to try to solve a social problem (harassment) with a software feature, you had better indicate how to do that. Unfortunately, software is limited in what it can do, and harassment is not a simple problem.

It requires a community strategy. It’s pointless to argue whether or not people can be ‘forced’ to abide by it. It’s not rules. It’s a strategy to inform decisions. People are free to not use it, but the availability of good strategies that work well with our tools will improve the quality of responses to the situations they are being designed to deal with. Someone else might figure out a better way, and ideally, they’d let us know about that situation so we could incorporate and improve.

I honestly don’t see another way to deal with the threat of an equally large bad actor who is equally capable of devising collaborative strategies. To develop good social software we need a culture of collaboration at both the software and the social level. I’m sorry, maybe I’m not presenting this idea very eloquently.

This would be no different from any other part of Mastodon. I just think maybe it’s not something we typically think about as being part and parcel of software. But social media does need the social aspects of its workings considered, I think. It’s kind of irresponsible to give someone the tools to create a community without giving them an example of how a safe one works. Plus, I definitely think there needs to be the ability to tap the collective resources of the Mastodon community (I don’t think we can reasonably argue that we’re not building community; I literally talk to you people every day) in a constructive way to deal with pervasive harassment or other general crises. We could do that informally, like right now, but I think that would be slow in situations that could be very time-sensitive…

I really am sorry if I’m not being easy to understand.


#6

You’re using a lot of very vague sort of business-esque jargon which I’m finding difficult to connect to what you’re actually proposing.

  1. Some sort of Advice for Running a Mastodon Instance guide or wiki, including community moderation, would be a good resource, I’m sure, and perhaps could be a collaborative effort from the admin channels that exist. Just a way of sharing experiences and what works. The big difficulty is that one of the main concepts of Mastodon is that we are not a single community but a network of distinct neighboring communities. Every admin is going to have a different style, and Mastodon the Project is just about building tools for them to have their community in whichever style suits them.

  2. Some sort of guide for users would probably not be read; trust me, we tried this many times before we had the onboarding modals. It didn’t work. “How to avoid escalation” is, I think, once again a misunderstanding of what “harassment” means. These are not good-faith, honest conflicts between two otherwise great people. These are targeted attempts by trolls to intentionally upset people. There’s no way to avoid escalation with a lolcow forum or an alt-right group. Once they’ve targeted you, they won’t let up unless you delete everything or “an hero” as they put it. When someone is doxxing you, there is no mediation or peaceful resolution.

The problems with harassment on the internet, as a group behavioral problem, are problems in society at large. There’s only so much we can do to manage the entire English-speaking internet, especially when designing software for a vast multilingual userbase.

What you keep bringing up seems, to me, off-topic to developing software features, which is the conversation we are trying to have. I think what you want is best achieved by joining or running an instance like awoo.space, which has very intentional community guidelines and conflict mediation practices. Our job, as developers, is to give people the tools to make a community like awoo.space or a community totally unlike awoo.space. Their choice. We have to be responsible in what tools we make, but that is still our role: tool makers. We cannot control a decentralized system of entirely distinct people.

There is no “Mastodon Community”; there are communities which use Mastodon. Do you regularly talk to and know the large community of Japanese schoolgirls? The French Channers? The middle-aged white fathers? There are lots of communities in a vast (though not entirely) connected network, and they all have different cultures and values. We aren’t here to enforce any way of being onto other cultures based on what we think is best. We just have to try to design software that protects users from abuse and does not make abuse easy.

The topic of this thread is to discuss the creation of specific task groups to work on certain feature sets. Like working committees. One of which would be an anti-harassment feature development task force.


#7
  1. Yes. I’m expecting that this can be changed or even ignored by communities as determined by their individual norms. However, I also expect its existence to increase the quality of the responses.
  2. Not really anticipating this use case. Users would benefit mostly from informed moderators rather than directly from the manual.

The reason I keep bringing this up is that I think developing features absent a good idea of how they fit into the whole is bad. It risks over-promising on the feature front and changing user behavior in a way that can be easily exploited. Consider the current privacy settings feature. Realistically, the settings depend on the goodwill of the receiving party; that’s why we put warnings into the UI. But what do we do when those privacy protections do fail? What happens when an instance falls into complacency with the warnings and lets its guard down?
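To illustrate that dependence, here is a minimal sketch of why the warning exists at all: the software can only request a privacy level, never guarantee it. All names here are hypothetical, not actual Mastodon code.

```ts
// Hypothetical sketch: visibility is a request to remote servers, not a
// guarantee, which is why the UI warns the user. None of these names are
// real Mastodon identifiers.

type Visibility = "public" | "unlisted" | "private" | "direct";

function privacyWarning(visibility: Visibility): string | null {
  if (visibility === "private" || visibility === "direct") {
    // A cooperative server hides this post; a hostile one can still leak it.
    return "Remote servers are trusted to honor this setting; it is not a guarantee.";
  }
  return null; // no warning needed for already-public posts
}
```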

I think features that explore both how a tool is going to be used and what its goal is are necessarily going to be better for this task. I think the task is uniquely dependent on having good people processes as well as the simple availability of features. For instance: thinking through the dynamics of a targeted harassment campaign, it seems to me that a very useful feature for both a user and an admin would be to freeze posts from newly federated instances, or to freeze federation generally, while they clean out the timeline (roughly sketched below). I don’t think this is necessarily intuitive from a feature-first outlook.
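To be concrete, a rough sketch of what I mean by a freeze switch. Everything here is made up for illustration (InstancePolicy, shouldAcceptPost, and so on are not proposals for real Mastodon internals):

```ts
// Hypothetical "federation freeze" control: an admin can pause all inbound
// federation, or only posts from instances first seen very recently.

type FederationMode = "open" | "freeze_new" | "freeze_all";

interface InstancePolicy {
  mode: FederationMode;
  // Instances first seen within this many days count as "newly federated".
  newInstanceWindowDays: number;
}

function shouldAcceptPost(
  policy: InstancePolicy,
  firstSeen: Date,
  now: Date = new Date()
): boolean {
  if (policy.mode === "freeze_all") {
    return false; // pause all inbound federation while cleaning up
  }
  if (policy.mode === "freeze_new") {
    const ageDays = (now.getTime() - firstSeen.getTime()) / 86_400_000;
    // Hold posts from instances this server only recently federated with.
    return ageDays > policy.newInstanceWindowDays;
  }
  return true; // "open": normal federation
}
```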

Basically, I think that defining this workgroup as an anti-harassment feature development group denies it the scope necessary for its deliberations. Does that make more sense?


#8

Err, now I’m even more confused.

The features we’re working towards are in response to specific problems, as we engineer solutions to them. We aren’t just… picking them out of a hat. There’s a lot of intent going into these designs, especially around how people use features, to make them good. I don’t know why you think there isn’t. What you’re now saying just seems very unrelated to what you were saying earlier.

If you want to work on something that is not anti-harassment, then that is simply off-topic, and you can work on it in a different space than this hypothetical anti-harassment workgroup, which has the purpose of being a specific, focused group that works out logistics. Meta-conversations don’t belong in workgroups; that’s typically something discussed in a more zoomed-out space. If every attempt at a logistical conversation around UX design or threat models is going to be derailed like this, then I’d rather not work in such a hypothetical work group.

I want conversations which start with threat models, proceed to solutions, work through how effective each would be and what it wouldn’t be able to cover, then proceed to the specifics of how the feature needs to behave, designing the UX and language, and implementing the code. I want to work with people who share expertise and experience in caring for vulnerable communities, designing software features, and coding. If someone is inexperienced in or uninterested in the subject matter, there are better ways to get involved than a specialized, focused work group.

As for your idea around federation pausing: it doesn’t take into account that a lot of harassment does not come from people within the fediverse. Lolcow forums are, by their nature, not on Mastodon. They’re forums on separate websites. They do this to avoid moderation and law enforcement. It also depends on a fast-acting admin, which may not always be the case. It’s worth discussing further as a proposal down the line, but right now we already have a good chunk of features to work through just to get on par with standard and commonly expected anti-harassment features for users. So my intention is to start with those and proceed down the line, seeing along the way what threats we still need to address.


#9

focus too much on solving these challenges in software features

I am not exaggerating when I say I have literally never seen any project or organization focus “too much” on anti-harassment measures. Ever.

Not one.

But I’ve seen several turn into shit from lacking focus on them.

It’d be fantastic if somehow all harassment issues were settled with the harasser admitting they’d done wrong, apologizing, and straightening up. It’s also unrealistic. Per elsewhere in the thread, emphasis mine:

Lack of focus on anti-harassment tools has wrecked thing after thing by allowing shitty harassment groups to run rampant, gain a solid foothold, and shut out all manner of folks.

You’re concerned about unintended consequences of anti-harassment tools, and concern about unintended consequences of anything matters to some degree, but it’s starting to feel like you’re intentionally missing this point. To be bluntly clear:

Some harassers do know better. They fully intend to cause harm. They want to hurt people, and even if that’s out of a greater lack of understanding, it is malicious.

That’s not hypothetical. That’s reality.

If you want to fix that – I cannot for the life of me imagine how – then you will have to start doing outreach to harasser groups, to tell them the errors of their ways.

Good luck – sincerely, if you can get those people to stop, more power to you – but Masto as a whole isn’t about ‘fixing’ them, and this discussion isn’t about ‘what if we talked to them’. It’s about triage and prevention for their victims.

‘Anti-harassment’ sums that up pretty neatly, IMO.


#10

The limitations of reactive design are exactly why I’m proposing that a scope limited to anti-harassment software features is too narrow…

I don’t disagree with your proposed model of development; I only disagree with limiting it to software features, and with defining it in opposition to something rather than as its own goal. Why are anti-harassment features important? It’s not just because they counter harassment. Making them simply one side in a cat-and-mouse game is going to keep the stakes of that game high, because the focus is always going to be least-effort counters rather than proactive solutions or long-term strategy.

This is exactly the same problem facing the information security industry. Security solutions are pushed as software solutions in the cat-and-mouse game of defend and attack, so when those software solutions break down, the damage is massive.

Also, you misunderstood the feature I proposed, but that’s honestly beside the point, as I was giving an example of lateral considerations.


#11

Everything breaks down over a long enough timeline.

This isn’t a “problem facing the information security industry”, this is a truism about all things. ‘Permanent’ is relative. No measure is ever going to suffice forever. Period.

It’s defined in opposition because it’s opposed to a destructive thing. By definition, lubrication is ‘anti-friction’. It’s still positive despite effectively being ‘opposed’ to something, because it’s opposed to a uniformly bad thing.

Honestly – and maybe this is just me – the discussion of why it shouldn’t be named such-and-such, why it needs to be ‘less reactive’, and all the elements outside the scope of software features – it all feels unproductive in the face of the fact that anti-harassment is what it will be. Nobody’s holding some illusion that it’s going to be a sewing circle or something.


#12

Which is why you need a plan for it. Unfortunately, that plan doesn’t usually get made, because of the focus on software features. Again, reactive design compounds the damage of the inevitable failure because avenues outside of the least-effort solution are not explored. It ‘works’ (stops the harassment temporarily); okay, stop thinking about it; move on to the next solution when a problem arises.

This mode of development takes a shit ton of high-impact failures to get decent solutions. High-impact failures that could be minimized with community support and with consideration of the community response when the tools are designed.


#13

Which is why you need a plan for it.

Planning ahead can’t happen until the discussions about current plans fully crystallize. It’s hard for that to happen when – with respect – the value of the endeavor itself keeps needing to be proven in dialogue.

If it were a house, and these were the blueprints being drawn, constantly asking the people drawing them, in a dozen different ways, how much they really want to build a house isn’t helpful. It just makes it harder for your feedback to be heard, because there’s so much to try to take in, and it paints your input as potentially antagonistic instead of constructively adversarial.

Constructively adversarial: "I know there’s plenty of soft pine, but we should use a hardwood, or a house this size may collapse. Though we could counter that other ways, too."
Antagonistic: “Why that shape? Why those nails? Why this doorway? Why this board thickness? Why these support beams? Why this land? Why these windows?”

I exaggerate only to clarify the intended difference between the two input types.

I get that you are still concerned about unintended consequences of anti-harassment features. I truly do. But I think you’re hurting people’s ability to hear your concerns by questioning every possible aspect of this part of the work.

So, in my opinion, if you’re really worried about those unintended consequences – you might want to pick your battles more selectively.

Because finding fault with every step of the process makes it seem like your goal might be to slow it down, not help prevent problems with it.


#14

I think I’ve identified the breakdown in communication: this is a thread specifically for talking about software feature development. Non-software approaches to handling harassment problems are beyond the scope of this thread and belong in a different discussion.

[moderator: as the thread has been split off, this is no longer the case]


#15

@noelle
There is no meaningful distinction between a technical and a social feature when you are trying to solve a social problem with software. The discussion will and must include both aspects. You are not discussing how best to deal with misbehaving programs; you are discussing how to handle misbehaving people. Ignoring this will only hurt the effort.

Edit to avoid double post

@sydneyfalk

This isn’t my argument so I’m not really sure if I should respond to it.

Specifically, I think that defining a group’s scope too narrowly, and defining a goal as contingent on its antithesis, are self-defeating practices. That has nothing to do with unintended consequences, unless you consider features being worked around to be “unintended consequences”. In which case, yes:

I do think that you should be worried about what happens when a bulwark fails, I do think that you should have recovery plans accessible and easily supported by the software, and I do think that process should take place in tandem with development.


#16

@Irick, what I think you misunderstand - I’ve said this before, but it bears repeating - is that very few people in this conversation, if any, are new to handling social problems. This is not a naïve approach; there is a large body of experience coming together to address the problem of harassment.

The conclusion of my experience, at least, is that when people misbehave online, they do so in predictable ways. In that respect it is very similar to writing software to deal with misbehaving software: we can observe a pattern of misbehavior, treat it as damage, and route around it as a stopgap so that we can interrupt the pattern. Additionally, if diverting around the misbehavior fails to stop the problem, we can observe that and adapt the systems to accommodate new solutions.

The objective of this thread is to discuss software features, and, with regard to harassment, the specific features we need in order to implement that identification, routing, interruption, and adaptation. I am telling you as a moderator that non-software considerations belong elsewhere.
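As a rough illustration of that loop, the stopgap stage might look something like the sketch below. The names are invented for the example, not Mastodon internals, and the adaptation step (tuning thresholds based on observed failures) happens outside the snippet.

```ts
// Sketch of the "identify, route around, interrupt" loop described above.
// All identifiers are illustrative.

interface Report {
  reporterId: string;
  targetId: string;
  reason: string;
}

type Action = "none" | "mute";

function routeAroundMisbehavior(
  reports: Report[],
  threshold = 3
): Map<string, Action> {
  // Identification: count reports per target to spot a pattern of misbehavior.
  const counts = new Map<string, number>();
  for (const r of reports) {
    counts.set(r.targetId, (counts.get(r.targetId) ?? 0) + 1);
  }
  // Routing/interruption: treat the pattern as damage and apply a stopgap.
  const actions = new Map<string, Action>();
  for (const [target, n] of counts) {
    actions.set(target, n >= threshold ? "mute" : "none");
  }
  return actions;
}
```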


#17

The title chosen for this split-off thread, “Technical vs Social solutions harassment”, is problematic and emblematic of exactly the issue I was outlining.

Consider the below a more representative form of my argument:


#18

I think the technical/social split is too broad. We should consider the following capabilities:

  • The protocol - the actual protocol used to exchange data between instances
  • Client conventions - commonly accepted conventions among network clients (for example: if a post is tagged #nsfw, hide all images; or: do not send more than 500 characters). A minimal sketch follows below.
  • Network association rules - who can connect to the network, and when; the situations in which a node (instance) can be disconnected
  • Moderation conventions - per-instance rules about who can join the instance, what kind of messages can be sent there, and when a user can be forced to leave the instance. For example, “the fifth letter of the Latin alphabet is forbidden”

I think we generally operate on one of those four capabilities.
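For the client-conventions layer, a minimal sketch of what such purely client-side rules could look like (illustrative names only, not any real client’s API):

```ts
// Hypothetical "client conventions": rules a client applies on its own,
// with no protocol-level enforcement behind them.

interface Status {
  tags: string[];
}

const MAX_CHARS = 500; // commonly accepted limit, not enforced by the protocol

function shouldHideMedia(status: Status): boolean {
  // Convention: if a post is tagged #nsfw, hide its images by default.
  return status.tags.some((t) => t.toLowerCase() === "nsfw");
}

function canSend(text: string): boolean {
  // Convention: do not send more than 500 characters.
  return text.length <= MAX_CHARS;
}
```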

Additionally, I am skeptical about the possibility of running “projects” on such a distributed network. Certainly there will be awareness campaigns and projects revolving around the development of software tools (for example, the original Ruby-based Tootsuite). But I doubt planned, project-like activities can be run at large scale across a whole network.