[old discussion] Account Domain Blocks

@saper
Do you mind clarifying the block feedback mechanism you’re referring to?
I’m seeing a possible ambiguity between local instance feedback and remote instance feedback.

any form to letting the blocked entity know it is blocked

Thanks for the clarification on muting and blocking @beatrix-bitrot. I agree that both are useful and am delighted to hear that you’ll do a “strong blocking” version even if it doesn’t make it into mainline. I’d certainly encourage the admins of the instances I hang out on to press for including it in mainline (or use your fork if mainline doesn’t adopt it).

[Also I forgot to mention earlier, glad to hear that instance-only toots are on the roadmap!]

@maloki I very much second the request to clarify the language! I’m also hazy on the relationship between “muting and blocking” (which we’re using here) and “silencing and suspending”. During the soc.techncs.de incident, it seemed there was a lot of confusion about the behavior of suspend in the Matrix chat room.

3 Likes

I think that admin-mediated discussion without systemically notifying the blocked party (e.g. the admin of the hosting instance can determine whether the remote instance is a bad actor prior to initiating contact) is a way to at least lessen the abuse potential of the feedback channels. I think those feedback channels must exist, though, even if they are selective, if we are going to allow for a process of community correction.
@sydneyfalk
I agree with your analysis, though I would point out that it is trivial to systematically bypass a domain block as well. That’s why I’m not in favor of alerting the blocked instance at all. However, I do think that it may be useful to explore the utility side of this low-pass filter feedback and put it up against the risks.

In theory yes. But it just creates an additional communications channel behind the block.

Wikipedia has a dedicated forum to discuss block removal

But this is about admin-initiated blocks, there is no user-originated muting or blocking.

counts

I’d think for instances of more than some number of users, counts would tell admins exactly how much of a problem their users are being. It could even be a count or a percentage. And the instance admins knowing about the blocks is one thing – unavoidable. (Even though it opens the possibility of trustworthy admins falling prone to singling out users and complaining about them, as has happened a few times now, IIRC.)

Below that level, over time, I’d imagine it makes it (somewhat) easier for organized harassers to try to find specific blockers – they might watch activity levels, test over time with posts to accounts on the list to see what kind of responses they get, slowly compiling a probable list of people who’ve blocked the instance, to use in another instance’s more targeted, refined harassment campaign. (There ARE groups that systematically work harassment angles, like kiwi and some sectors of encyclopedia dramatica and 4chan, after all – no reason to assume they wouldn’t apply data mining to it eventually. No reason to give them an easier time with small datasets.)

Perhaps a low-pass filter would be useful here – the stats individual admins get about instance blocks simply exclude instances whose usercounts fall in the lowest 10% (or even 15%) of instances? Activity levels should be a factor in that too, I’d think. (A large-user instance below a certain threshold of daily posting is going to provide similar opportunities to a small-user instance with high volume, because it’s unlikely all the users are simultaneously dropping to only a post or two a day – it’s more likely some are just going quiet and some are still full-stream, and they’ll stick out in the data. And I hope it goes without saying that in that model, single-user instances shouldn’t be laid out individually, but as a count of single-user instances, to make someone who sets up their own instance less immediately an obvious target.)
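To make the low-pass idea concrete, here’s a rough sketch in Python of what the filtering might look like. Everything in it – the field names, the thresholds, the shape of the report – is made up for illustration, not a claim about Mastodon’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class SourceInstance:
    domain: str
    user_count: int         # users on the instance placing the blocks
    daily_posts: float      # rolling average of posts per day
    blocks_against_us: int  # users there who blocked the reporting instance

# Hypothetical knobs: the 10-15% usercount cut and an activity floor.
USERCOUNT_PERCENTILE_CUT = 0.10
MIN_DAILY_POSTS = 20.0

def low_pass_block_report(sources: list[SourceInstance]) -> dict:
    """Report block counts per source instance, suppressing the small
    and quiet ones whose users would be easy to single out."""
    by_size = sorted(sources, key=lambda s: s.user_count)
    cutoff = int(len(by_size) * USERCOUNT_PERCENTILE_CUT)
    too_small = {s.domain for s in by_size[:cutoff]}

    report = {"per_instance": {}, "single_user_total": 0, "suppressed_total": 0}
    for s in sources:
        if s.user_count == 1:
            # Never list single-user instances individually; fold them
            # into one aggregate count so a one-person instance isn't
            # immediately identifiable.
            report["single_user_total"] += s.blocks_against_us
        elif s.domain in too_small or s.daily_posts < MIN_DAILY_POSTS:
            # The low-pass filter proper: small or low-activity sources
            # leak too much about exactly who blocked whom.
            report["suppressed_total"] += s.blocks_against_us
        else:
            report["per_instance"][s.domain] = s.blocks_against_us
    return report
```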

(sorry if this is off base/duplicating other points about counts, I spent a while working on it to refine the idea)
(posted again to get out of ‘reply’ – wasn’t meant to be a reply)

2 Likes

(long, solely concerning the validity of single users being able to permanently block instances, slightly gross hypothetical near the end)

I’m not really interested in continuing to defend the base validity of my concerns.
This conversation is not worthwhile.

This is what people are trying to do, not only to avoid bad-faith/lax admins (or organized harassers), but to avoid detritus and validity arguments they’re not interested in having. (Frex, some users who talk music a lot might block some (hopefully imaginary) instance called deathmetal.fore.ver – they’re not into death metal, and they’re not interested in getting interested in death metal – so it’s detritus to them. Why should they have to block every single deathmetal.fore.ver user to keep detritus out?)

They’re trying to step out of a conversation that is not worthwhile for them – and that’s a decision they should be able to make, just as (arguably) you seem ready to make it here about a specific element of the discussion.

From there, I want to move on to how can we architect a solution that preserves that without making prejudice a button click away.

If anything, ‘prejudice’ being a button-click away is the current state – a harasser finds an instance they can harass from, gets their account blocked, and sets up account after account on the instance because they can. Then you’ve got the birdsite problem, because they simply don’t give a shit about harassment. (I guess they’re watching money-counting machines? I dunno what birdsite folk actually do behind the scenes for actual moderation – very little, from what I’ve seen.)

As for not-harassment stuff: Time is finite. Nobody has the time to try to understand every idea. People have to curate if they want to focus themselves in specific areas.

On the ‘discussions’ people need to have to spread social awareness: Nobody should be obligated to educate anybody, or even read anybody’s statements.

This goes double for the people who think white supremacy has value, or “it’s not natural, animals don’t do it” has value. It’s certainly a good thing for people to try to educate them, but it shouldn’t have to be anybody’s daily work out of obligation. This lays the burden on the people most likely to already be struggling with giant burdens, and often people already systematically denied access to basic necessities.

Frex, I shouldn’t have to explain, again and again, why white supremacy is a tragedy waiting to happen for all of humanity, or how anthropologists straightwashed their data for ages about queer couplings raising offspring from straight couplings in nature settings of all sorts, or that a flat earth is flatly nonsensical. Yay if I do, yay for others who do – but if it’s ‘required’ to be part of my conversation, I think that’s wrong.

It’s Harrison Bergeron as conversational model, where, say, every queer person has to justify, to any straight person who asks, why they’re not “really straight” if such-and-such. Or why straight Pride doesn’t exist.

Those things are already part of everyday life for folks in a lot of cases. RL enforces this awful justification effort of oneself. Masto, IMO, should not.

Instance blocking would cut a wide swath of it down. It would shut out somewhat more organized harassers from harassment targets, as well as allow people to simply filter out smaller instances that are functionally going to foster only conversations they already have on a regular basis.

These are some other problems people are trying to address with this feature, I think. The fact that the feature can be used by 'phobes and harassers to block out victims, and to ‘codify prejudice’, does not matter to them. 'Phobes and harassers don’t block the people they want to upset. They want to keep upsetting them.

Hopefully this clarifies the intent involved.

You have to remember that the crowd of humanity is still made of individuals, making individual choices. They should have the option of pulling curtains on certain windows.

but when it comes down to it I don’t trust people to be able to hold themselves accountable for their own actions

That’s kind of the issue these are meant to address – the people who don’t hold themselves accountable for harassment and small-mindedness.

a stop gap for a more socially aware process

The problem is that the “more socially aware process” is often going to end up, in large part, siding with the status quo. Used to be, people complained about getting shit for being queer and were told that visibility equalled harassment. Now there’s usually more of a sympathetic vibe about it, at least. But at first, it isn’t always going to be constructive, and it is never going to be something people should be forced to participate in regardless of their wishes. It’s a lot easier for a group of people (good or bad) to decide the outlier is the problem, not the current state of affairs.

Should the ‘more socially aware’ process happen? Sure. Should the users who blocked that instance be forced into unblocking, or pestered into it, to help them ‘understand’? Hell no. That’s nobody’s job but theirs. You can’t “make” people want to understand. (IME, you can barely lead them to it and get them to understand at all, even when they sometimes want to.)

Not ‘trusting’ people to make the right decisions is, by definition, anathema to the concept that we have something to learn from them as a ‘group’ anyway, IMO. On some level, if you don’t trust them – why do you trust what they’re saying? Why would you trust that they’re acting in good faith?

You make that decision yourself. I don’t make it for you. I wouldn’t want to, in fact. And I don’t think the system should be designed to force inclusion of an instance’s content (or to repeatedly suggest it!) if it’s a connection that isn’t trusted. That’s going to upset (and potentially punish) those who were seeking to get away from unproductive conversations. It’s also going to be treated like a goal by harassers. “How do we get just polite enough to be able to endlessly pester them about this?”

Transparency for those controlling the systems, definitely. Users should still be treated like private citizens. Closing the curtains of my window to part of the world should not be met with messages, regularly, about how that area’s ‘safe’ to watch now.

Nor should it be met with people removing my curtains. Those are my curtains and I chose to pull them. Respecting that is extremely important.

I could see a single notification spread to the users of X instance only, letting them know a pattern of abuse was detected coming from Y instance, the admins discussed, here’s what was decided. One notification about the curtains being unnecessary because the performance artist outside that particular window building a frozen vomit snowfamily has finished, or his stuff’s been taken to a gallery, or he gave up and it melted – I could see that.

At least, that’s my take on it.

(apologies if I’m covering points already handled by @shel earlier – I did read but may have lost track of some of it composing this)

3 Likes

@saper
Yeah. I do think user level blocks are a different beast.

I really don’t know a good place to go to see an analog. Honestly, Usenet predates my existence by quite a bit so that’s not really a parallel I can draw…
I know that Wikipedia’s process is janky but I do at least like their commitment to opening up dialog around the idea of people changing.

I mean, I don’t really think it’s reasonable to expect users to commit to a long manifesto or to follow something like Wikipedia’s process documents, but I think that’s something we can at least make available at the admin level. We know that admins kinda define the moderation philosophy of their instance. If we are going to accept, as you proposed in your initial post, that these domain blocks are actually meaningful, I think it follows that this has to be the case.

I think it is, therefore, reasonable to trust that admin to be able to determine whether dialog with an instance is in the best interest of that community. And I think we can do that while still protecting the users facing harassment, by making these actions anonymous but still quantifiable to that local admin. This is not exposing data that the admin would not be able to pull with a simple database query; it’s just making it visible, actionable, and anonymous by default.
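For illustration, the kind of aggregate I have in mind could be as simple as this – a sketch using sqlite3, where the account_domain_blocks table name is my assumption about the schema, not a guarantee of what Mastodon actually ships:

```python
import sqlite3

def domain_block_counts(db_path: str) -> list:
    """How many distinct local users block each remote domain --
    counts only, with no blocker identities attached."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            """
            SELECT domain, COUNT(DISTINCT account_id) AS blockers
            FROM account_domain_blocks
            GROUP BY domain
            ORDER BY blockers DESC
            """
        ).fetchall()
    finally:
        conn.close()
```

The admin sees that, say, forty of their users block some domain, and nothing about which forty.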

1 Like

However, I do think that it may be useful to explore the utility side of this low-pass filter feedback and put it up against the risks.

Everything’s got a trade-off. I value individual privacy options and individual filtering over whether or not it’s trivial to bypass.

Front door locks, frex, are trivial to bypass if you know what you’re doing (or if you don’t, in some cases). Most can be kicked open. Front doors usually still have locks, because there’s no perfect security solution.

The goal of this, or any measure, is not to somehow eradicate the problem – that’s impossible when humans are part of the problem involved – but to deter the problem at a reasonable cost. So if you have to kick in someone’s front door to harass them, versus just doing it in front of their window, it reduces harassment.

If the ‘cost’ is that, overall, people won’t be exposed to as great a diversity of people, that is unfortunate – but considering the upside is ‘deter harassment’, and a fair chunk of the people who aren’t harassers but still ask invalidating questions don’t actually want to be exposed to the vastness of people, IMO, it’s worth it.

1 Like

Temporarily Locked Thread, will be unlocked soon with a message from the admins.

I am afraid I am confused, and I find it hard to contribute anything useful. Can we try to put some structure here, like trying to list the threat models and the network features that can be used to address them?

The thread was locked because we wanted to conclude this discussion tonight; however, we have had a change of heart. Before unlocking, though, I wanted to write this message, as a lot of the thread has been kinda outside the realm of the intended purpose of this thread.

Very little focus in this thread has been on solutions, neither from the parties concerned nor from the other side. We can agree that there are two fairly strong camps on the question of this feature. But as I initially posted, and have said elsewhere, there is no question regarding whether this feature will be in Mastodon; it is here to stay. The only thing we can discuss now is what can be improved or changed, if anything needs to be improved or changed.

We were, however, beginning to go in the right direction, which is why we are reopening the thread – with a warning to stay on point about the functionalities of the feature, threat analysis, and other details that concern them.

To help you continue the discussion in this manner, I will quote a few items which are in the right direction:
Regarding local-only posts:

Regarding shel’s request: @Irick, you later asked “what problem is this feature attempting to address”, and the answer to that is in the thread as a whole, several times.

We initially wanted to lock the thread after this response, which we (me and the other admins) found to synthesize the feature’s functionality well.

You may now continue.

1 Like

threat models

I should have made this clearer. I’m sorry.

  • Harasser (or small group of harassers) finds an instance with a lax-or-bad-faith admin. They each get blocked by people they harass, but because the admin isn’t policing them, they can set up account after account on that instance, or even go further and set up extremely conservatively designed harassment bots with accounts. The person in question cannot possibly keep up with it, but until the admin’s dealt with, they will experience harassment.

  • Harasser starts an instance for the purpose of harassment, allows and encourages non-harassers to join the instance to mask the fact that harassers will be operating there, and ignores harassment or takes placating non-measures when harassers are pointed out. If they’ve cultivated enough users to mask the intention properly, this may simply look like they’re ‘bad at the job’. If they ‘police’ the users by deleting them but purposefully ignore that the harassers might set up new accounts, they might look ‘good faith’ for a long time.

In either case, the user-level instance block cuts down the impact on targeted individuals, potentially to manageable levels, or to levels deterring enough to make the harassers seek easier targets. (It may also make it easier to locate lax or bad-faith admins, if it lets the local admin identify which instances are sourcing most of the harassment – i.e., which ones are getting blocked by their users often.)

Whether the admin blocks a ‘trouble instance’ is their call, but I say that with the assumption of transparency. If you do it, you have to give the reason – unless it’s a single-user space. IMO, single-user spaces shouldn’t have to disclose their instance blocks at all, unless they open up.

3 Likes

Thanks. That’s very useful.

I have a question: to do the things above takes a lot of care and effort – for example, to “encourage non-harassers to join”, take “placating non-measures”, and “mask the intention”. I’d assume we are discussing a deliberate and well-prepared effort here, with some resources to spare.

Why would they stick to a single instance (new or captured) at all? Spreading the same effort across 1000 instances (say, a few accounts each) would probably be much safer. Even if some or most admins react, many won’t, or the cases will be borderline and the perpetrators may get away – unless a large operation is organized between “good faith” instance admins to identify the IP addresses and other browsing-environment features of the attackers.

1 Like
  1. Should Mastodon support some kind of reconciliation on a domain block?

I assume this means ‘user blocks of instances’, and I’m sorry if I’ve misunderstood.

Personally, I’d say reconciliation between user/instance is too granular, too fraught with abuse possibility.

Enforcing contact would be a mistake in this context, IMO. The users should be treated like adults. If they want to give the blocked instance another spin someday, that should be up to them.

2 Likes

They might not – and indeed, over time, they’re guaranteed not to. There are going to be trade-offs – and accounting for a better-distributed attack would require other kinds of safeguards, such as the organized defense you describe. (Which, while I fully advocate it, is well beyond my scope here. I am not an admin, simply an interested user.)

However, even in those circumstances, instance blocking can help staunch the flow in the meantime as a temporary measure, and I think it’d end up being used in those cases often. (The resolution of defense efforts at the admin level would also be a notification-worthy event, I’d imagine.)

1 Like

@sydneyfalk please don’t apologize for backing me up. A conversation is richer with more voices trying to express their thoughts from different angles, perspectives, and methods. It prevents burnout and better represents a larger consensus.

Here is my threat model – why I want to be able to block a domain (not mute; that is, I want to disappear from their perspective).

Let’s say there are two people. Let’s call them, uh, Fl1ght and Faith. They’re the owners and admins of, say, a G4m3rg4t3 instance, and maybe also another sister instance generally full of alt-righters as well, many of whom go on ED and KF. These are real instances with an active userbase. These users have a tendency to confront and harass users on Mastodon. (These instances are, perhaps, postActiv or GNU/Social instances, maybe even Friendica. They don’t necessarily support any Mastodon-specific features.)

Fl1ght and Faith are, in fact, the leaders of their harassment brigades. Fl1ght screencaps posts to make fun of people and directs their users at them. Faith doxxes people she doesn’t like and even blackmails them. They’re scary, dangerous people. Catching their attention may result in being turned into a “lolcow.”

Now, I am a user who has been targeted by Fl1ght. I know that Fl1ght is the admin of his instance. I don’t know how many users are on his instance, and there’s no way for me to go find each one of them and block them one by one. Ideally I want it to appear, from their perspective, that I’ve stopped using Mastodon. They’re an active community and they talk about more than just me. I want them to just forget about me and move on. Being able to block their entire domain, such that my posts do not appear from their end, without them knowing I’ve done this, is simply a quick and easy way to silently disappear so they’ll move on and forget about me, and it gives them no incentive to continue bothering me, as it does not provoke them further. It’s easier to do this than to individually block every user I see from that instance each time one shows up.

This is my use case for why I want blocking of domains to be a block, rather than only being able to mute.
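To pin down that distinction, here’s a toy sketch of the two behaviors – purely illustrative, and not Mastodon’s actual federation code:

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    domain_mutes: set = field(default_factory=set)   # inbound filter only
    domain_blocks: set = field(default_factory=set)  # inbound AND outbound

def should_display(viewer: Account, author_domain: str) -> bool:
    """Both a mute and a block hide the domain's posts from the viewer."""
    return author_domain not in (viewer.domain_mutes | viewer.domain_blocks)

def delivery_targets(author: Account, follower_domains: set) -> set:
    """Only a block stops outbound delivery: the author's posts never
    reach a blocked domain, so from that side the author appears to
    have silently disappeared."""
    return {d for d in follower_domains if d not in author.domain_blocks}
```

With a mute my posts would still federate to their instance; with a block they stop arriving, which is the “disappear from their perspective” behavior I described.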

2 Likes

I just wanted to pipe up that I had a good conversation around my analysis and identified a foundational flaw in my internal model. I had it in my mind that there could simultaneously exist and not exist a feedback mechanism between the person choosing to block and the wider community. I assumed the worst case (that someone could simultaneously not leak information about their choice and still influence others to make the same choice) and was so scared of the result that I failed to catch my mistake. Talking it out helped me realize that… pretty basic flaw.

I don’t want this to take away from any of the other concerns raised, but it at least helped me adjust my model to the point that I feel the ability to do user-level instance mutes is likely to do more good than harm.

As this topic’s most virulent parts have been settled, I think it is worth closing this specific thread, adjusting the title, and then bringing up other parts of it in new threads.