Add option to report a user to the instance admin/mod of that user too


#21

It might be easier (if the report counts idea is selected) for the admins to simply ignore the counts regarding users they have decided not to moderate. It might be better general practice than writing off specific other instance admins’ feedback wholesale, IMO.

The report counts would still give them the information, and their decisions would be on them – and it wouldn’t require heavily anonymizing stuff, wouldn’t require explanations that admins might feel insulted by, and wouldn’t require a ton of logging or anything either. Just a list of users and a count of their toots that were flagged in unblocked instances. (My thinking is that if instance A blocks instance B they probably won’t care about instance B’s reports at all. Maybe I’m wrong. :slight_smile: I often am.)

I realize counts may seem like scarce data, but usually the users being reported are being reported for reasons obvious to an admin upon actually looking at the user’s toots, from what I’ve seen.

If the point of the reporting is to give admins information that makes them more familiar with their ‘controversial’ users, and to make decisions about them, reports won’t do that as well as actually having to read those users’ toots, IMO.


#22

Sure, but this is a discussion between admins about troublesome users. I don’t know that it needs a whole heap of privacy, the leadup to a banning could stand a little transparency, right? Maybe? I dunno. I think it’s very situation dependent. But if we consider my idea of a notification only, an alert and link to a report log, the communication could be handled via email or any other medium.

As for report counts only… I don’t know that it carries enough detail to be entirely useful.

I think, at minimum, the admin should be alerted to a problem with a user, from another instance. Moving up the complexity chain, they can be invited to review the concerns, admin conversations initiated, then action taken. The more of these things we do, the happier I am as an admin.


#23

This is a good idea only if every instance in the whole fediverse has the exact same moderation rules.
But that’s not the case, so it will only end up spamming or pressuring admins on instances that don’t share your moderation rules into moderating content they might actually agree with.
Because if you keep getting reports for content you don’t see any issue with, you might end up moderating it just to get rid of the report spam.

For example, some instances will consider a toot with a sexist joke as needing moderation, but that’s not the case for every instance in the fediverse. So reporting it to the other instance’s admin will just spam and pressure them (yes, getting reports is a form of pressure) into moderating a toot they don’t see any issue with.

I can understand that some might want this, but you’ve got to make it optional and opt-in, as some have already said.


#24

The question here is whether reports become toots, like regular messages in the network. If so, they need to be rate-limited, blockable, etc. like any other message. Any form of communication can be used for abuse (even password reset messages!), so if reports do federate, they should indeed be treated the same way as toots.

If things are kept local, the local admin will know what to do when reports are abused.
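
Treating reports like toots suggests throttling them per sender. A rough sketch of what per-sender rate limiting might look like, with a sliding window (all names and limits here are invented for illustration, not any actual Mastodon internals):

```python
# Hypothetical sketch: throttle incoming federated reports per sending
# instance, so a report flood gets folded into a digest rather than
# filling the admin's feed. Window size and limit are made-up values.
import time
from collections import defaultdict

class ReportThrottle:
    def __init__(self, max_per_window=5, window_seconds=86400):
        self.max_per_window = max_per_window
        self.window = window_seconds
        self.sent = defaultdict(list)  # sender instance -> recent timestamps

    def allow(self, sender_instance, now=None):
        """Return True if this report may be delivered immediately."""
        now = time.time() if now is None else now
        cutoff = now - self.window
        # Drop timestamps that have aged out of the window.
        stamps = [t for t in self.sent[sender_instance] if t > cutoff]
        self.sent[sender_instance] = stamps
        if len(stamps) >= self.max_per_window:
            return False  # over the limit: queue for the digest instead
        stamps.append(now)
        return True
```

Anything rejected here wouldn’t be dropped, just deferred to the periodic digest discussed earlier in the thread.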


#25

Dostoi, I’m not sure if you’re replying to the start of the thread or the current state, but:

If reports are sent as toots, they can be muted or silenced easily.

There’s been a lot of talk about a ‘report’ somehow forcing action by the receiving admin; you’re not the first to suggest this, but I think that’s bizarre thinking.

And if any admin is happy to be unaware when his users are assholes across the fediverse, I think maybe they don’t have any standing when another instance silences them. Being an admin is a responsibility. If you don’t want to be part of the community, you don’t have to be, and if you opt out like this and ignore other admins, that choice will be made for you.

I’m not trying to be a dick here, but this is just obvious, isn’t it? None of us are going to take shit from other admins, none of us are going to bend to their rules and whims, but IMO if you’re not at least open to dialogue, if you’re intentionally unaware, then you can’t be upset when people make decisions for you.

Saper: agreed. Hence the digest idea. Rate limited notifications, with updates as required on the digest.


#27

Ok, but what if reports are used for abuse and the admin of the victim account is flooded with them? We should always remember that reports are dual-purpose messages. Never assume they are right just because somebody hit that button.

Thanks. Federated reports would be nothing different from a toot, except for the complainingAboutUser or complainingAboutToot metadata. Maybe they could even just be conveyed by a hashtag? (Privacy issues aside.)


#28

a discussion between admins

A discussion that’s between admins, yes – but it’s been demonstrated previously that some admins aren’t administrating in good faith, and there will be future bad-faith admins.

I absolutely think the banning process needs to have transparency leading up to it – but ideally this isn’t at the expense of non-targeted users (targeted users are already targets). Also, if the transparency can be abused by a bad faith admin in some way, that’d be dangerous.

The raw truth is that anybody can set up and proclaim themselves an admin – if creating an instance gives a bad faith actor ways to get target information, then I’d suggest the report system not provide target information.


#29

This is part of what I believe the count system would help prevent. Notching up a number in a list by a thousand doesn’t have nearly the impact of filling someone’s feed with a thousand toots.


#30

And if any admin is happy to be unaware when his users are assholes across the fediverse, I think maybe they don’t have any standing when another instance silences them. Being an admin is a responsibility. If you don’t want to be part of the community, you don’t have to be, and if you opt out like this and ignore other admins, that choice will be made for you.

This is an excellent sum-up of the issue. Admins are also mods by default on most instances, for good or for ill, and thus effectively ‘signed up’ to contain/prevent assholery.

If you’re setting up a pipe into the water supply that isn’t actually being curated, and what comes out is a half (or even a tenth) raw sewage, those downstream are going to come looking for the problem, and block off the flow of offal some way eventually.


#31

Not really. If you get reports for every account on your instance because some people want to harass you (and we know some people are ready to do that), then tough luck: you can get a lot of false reports. Note that grouping them by instance, or by any other field, won’t really solve the problem.


#32

If someone gets a count of people who have reported, sortable by instance or by account being reported, that’s fixable. Sort, eliminate the ones the admin in question chooses to ignore. One line in a spreadsheet.

Honestly, I don’t think granular, per-toot, explanation-included reports are going to make things better than indicators of where admins should look for trouble with their own eyes.

But that’s just my take. :slight_smile:


#33

What if people create virtual instances to make mass reports?

Honestly, I see this feature more as a harassment tool than a practical solution.


#34

Anything can be a harassment tool. The ability to mass-spam an admin with PMs and tags already exists. This won’t suddenly become worse if we make something else better.


#35

I’m perfectly fine with entertaining hypotheticals, but since people can already make virtual instances to specifically harass admins anyway, the point is moot. (Besides, what if people create instances specifically to join in the admin discussions and try to sway them more by throwing around a userbase? Anything’s possible.)

IMO, this is another reason the counts per instance report would be better – new instance on the list? Check that instance out, find out it’s created yesterday along with the last dozen that showed up on the list. Ignore that line. Move on.

Honestly, I see this feature more as a harassment tool than a practical solution.

With respect, harassment is about use of tools, not an inherent feature of a tool.

Also, harassment is already something admins talk to other admins about. If it’s all in a single short report, I’d imagine that’d reduce the amount of conversation necessary and make it easier for admins who do want to ignore their users harassing others to do so.

Even now, without reporting, there are admins who are recognized for explicitly ignoring harassment issues. It might get easier to do so if there’s federated reporting, but it still happens eventually anyway.

Some admins are going to disagree about what ‘harassment’ is. Making those conversations happen more quickly isn’t a bad thing, it just saves time getting to an inevitable conclusion.


#36

If/when I am an admin or mod, I want to receive reports about my own members from other instances. If there is risk of harassment I would like a way to block individual people (from my or another instance) from making reports to me, and if necessary, entire instances.


#37

How would those “counts” work if some genuinely considerable number of good-faith users sends complaints to the target admin?


#38

This doesn’t seem difficult. If there’s a timer on how often reports can be sent (daily, weekly, or perhaps something like once per day per offending user, etc.), then that message, or the digest report view, includes the number of reports.


#39

I was imagining it as a count of reports against each user, drawn from federated instances (perhaps with a breakdown that showed “not silenced” instances as a separate count, but I don’t know how controversial/useful that’d be, either), further broken down by reporting instance.

In my mind, it’s sort of like (and keep in mind, these are probably highly unrealistic numbers):

  • FOR REPORTS BETWEEN 12:00 AM GST 06/09/17 AND 11:59 PM GST 06/09/17
  • user @n: reported 0 times
  • user @z: reported 2027 times (5% of all user reports)
  • – 1620 by @mastodon.social (12% of all @mastodon.social user reports)
  • – 407 by @yet.another.masto.instance (78% of all @yet.another.masto.instance user reports)
  • user @q: reported 23791 times (22% of all user reports)
  • – 19012 by @mastodon.social (32% of all @mastodon.social user reports)
  • – 1092 by @another.masto.instance (13% of all @another.masto.instance user reports)
  • – 503 by @yet.another.masto.instance (9% of all @yet.another.masto.instance user reports)

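Mechanically, a digest like that is just a two-level tally: reports counted per user, then per reporting instance, with shares computed against the totals. A rough sketch, assuming each report boils down to a (reported user, reporting instance) pair – the record shape and function name here are invented, not an actual Mastodon API:

```python
# Hypothetical sketch of building the per-user, per-instance report digest.
# Each report is a (reported_user, reporting_instance) tuple.
from collections import Counter, defaultdict

def build_digest(reports):
    per_user = Counter()                      # total reports per user
    per_user_instance = defaultdict(Counter)  # user -> instance -> count
    per_instance_total = Counter()            # total reports per instance

    for user, instance in reports:
        per_user[user] += 1
        per_user_instance[user][instance] += 1
        per_instance_total[instance] += 1

    total = sum(per_user.values())
    lines = []
    for user, count in per_user.most_common():
        share = 100 * count // total
        lines.append(f"user {user}: reported {count} times ({share}% of all user reports)")
        for instance, n in per_user_instance[user].most_common():
            inst_share = 100 * n // per_instance_total[instance]
            lines.append(f" - {n} by {instance} ({inst_share}% of all {instance} user reports)")
    return lines
```

Sorting by `most_common()` is what makes the heavily reported users and heavily reporting instances float to the top of the digest.
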
This would, IMO, make typical targeted harassment patterns in any direction stand out more. The most overt ones might look like this (contrast with the above mock report for @q):

  • user @q: reported 26891 times (25% of all user reports)
  • – 19012 by @mastodon.social (32% of all @mastodon.social user reports)
  • – 3100 by @highly.reporting.instance (87% of all @highly.reporting.instance user reports)
  • – 1092 by @another.masto.instance (13% of all @another.masto.instance user reports)
  • – 503 by @yet.another.masto.instance (9% of all @yet.another.masto.instance user reports)

It’d also mean that nothing here says “bad user” or “bad instance” automatically – simply that highly.reporting.instance clearly finds something objectionable about user @q, and the admin probably wants to go look at user @q to see if they feel okay about whatever @q is doing.

If the admin’s opinion is that @q is within the ToS, they’re free to ignore the reports, and they know which instances have the issue with what’s being posted, and if they don’t like reports from specific instances, they can skip over those lines once they see the name – the information is still there, they have the choice of ignoring it.

This also might help find instances that are started to hide harassment – if six instances are started to carry out harassment of twenty users, each one of those users is going to stand out as suddenly getting a bunch of new harassment reports, and from those specific six instances. (But neither detecting harassment nor forensic accounting are my areas of expertise, so I’m guessing about that.)

Also, it’d probably work better as a spreadsheet of some form – sorting it by “total reports” would make it easy to look for the problem users, and comparing it to older reports would let vigilant admins look for booted users who start new accounts and engage in the same patterns, etc.
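
That period-over-period comparison could be as simple as diffing two periods’ counts and flagging users whose report totals jumped. A sketch, with entirely invented thresholds – what counts as a “spike” would be up to each admin:

```python
# Hypothetical sketch: compare this period's per-user report counts with
# the previous period's and flag sudden spikes (new targets, or booted
# users returning under new patterns). Thresholds are made-up values.
def flag_spikes(previous, current, min_reports=100, ratio=5.0):
    """Return users whose report count jumped, busiest first."""
    flagged = []
    for user, count in current.items():
        old = previous.get(user, 0)
        # max(old, 1) lets brand-new entries be compared against a floor of 1.
        if count >= min_reports and count >= ratio * max(old, 1):
            flagged.append(user)
    return sorted(flagged, key=lambda u: current[u], reverse=True)
```

A vigilant admin would then eyeball the flagged users’ toots directly, which is the whole point of the counts-only approach: the numbers say where to look, not what to conclude.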