No worries – defensive thinking is my reflex, but I understand it isn’t that way for everyone. (And, in truth, I envy others for not being that way, most of the time.)
Muting would also take away the harassed person’s option of checking whether harassment has died down, in cases where a slur is also used for other purposes. I know that specific slurs get reclaimed within some segments of some communities (‘dyke’ being a notable one in the lesbian community), and if a thousand harassers are screaming it, that count will be the metric for when (and whether) it will ever become useful for me to un-filter.
Thinking more on it: Two kinds of mute – one where you turn the volume down to just counts from those you follow, and one where you mute the specified topic entirely, no counts at all. Maybe ‘noise reduction’ is a better analogy for the former than ‘mute’, really.
To be clear, I’m suggesting the possibility of two filter types:
- ‘Noise filter’, which leaves only the counts-from-followed on a topic
- ‘Mute’, which kills any hint of those topics
(I don’t quite understand how topic muting works right now, so if the ‘mute’ I’m suggesting is what’s already in place or planned, my apologies.)
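To make the distinction concrete, here is a minimal sketch of the two proposed behaviours in Python. Everything here (the `Post` shape, the `apply_filters` name, the `mode` flag) is illustrative, not an actual Mastodon API:

```python
from dataclasses import dataclass


@dataclass
class Post:
    author: str
    text: str


def apply_filters(posts, keyword, followed, mode):
    """'noise' keeps matching posts only from followed accounts;
    'mute' drops every matching post, leaving no trace at all."""
    kept = []
    for post in posts:
        if keyword.lower() in post.text.lower():
            if mode == "mute":
                continue  # topic erased entirely
            if mode == "noise" and post.author not in followed:
                continue  # strangers filtered, follows kept
        kept.append(post)
    return kept
```

The design point is that ‘mute’ leaves no trace at all, while ‘noise’ still surfaces posts (and therefore counts) from the accounts you follow.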
One solution is a setting like “Only filter this word out of my notifications from people I don’t follow”, just like the existing “only show boosts from people I follow”.
EDIT: wait a minute, apparently we don’t have the ability to see notifications only from people we follow? That’s something we really should add: it would undercut a lot of harassment and enable a lot of notification filtering. Twitter has this, and it’s super useful.
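For the notification side, a hedged sketch of how the two toggles could compose. The `(sender, text)` notification shape and all names here are made up for illustration:

```python
def filter_notifications(notifs, followed, muted_words=(), follows_only=False):
    """notifs is a list of (sender, text) pairs. follows_only mirrors an
    'only notifications from people I follow' toggle; muted_words are
    stripped only when the sender is a stranger."""
    kept = []
    for sender, text in notifs:
        if follows_only and sender not in followed:
            continue  # hard cut: strangers never notify
        if sender not in followed and any(
            word.lower() in text.lower() for word in muted_words
        ):
            continue  # keyword muted, but only for non-follows
        kept.append((sender, text))
    return kept
```

Followed accounts bypass the keyword filter entirely, which is exactly the “only filter this word from people I don’t follow” behaviour described above.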
I like where this discussion is going.
Does anyone feel like synthesizing it into a GitHub issue?
Actually, I just connected this with a great server-wide feature as well.
As we’ve been trying to coordinate with the rest of the fediverse regarding sensitive content and #CW, it could be useful, at both the server level and the user level, to have certain hashtags flagged as sensitive immediately when posts come in from servers outside Mastodon.
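As a rough illustration of the idea, here is how a server might flag incoming posts whose hashtags sit on a server-level or per-user sensitive list. The `tags`/`sensitive` field names loosely mirror how Mastodon posts are shaped, but this is a sketch, not a real federation hook:

```python
# Hypothetical server-level list; any real deployment would make this
# configurable by admins.
SERVER_SENSITIVE_TAGS = {"gore", "nsfw"}


def mark_sensitive(post, user_tags=frozenset()):
    """Return a copy of the post dict, flagged sensitive if any of its
    hashtags appears on the server-level or user-level list."""
    flagged = SERVER_SENSITIVE_TAGS | {t.lower() for t in user_tags}
    if any(tag.lower() in flagged for tag in post.get("tags", [])):
        return dict(post, sensitive=True)
    return post
```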
My brain is a bit fuzzy, so this thought isn’t fully developed atm. I’ll come back.
let me just do you a favor re the Mastodon side of things. i’m not quite sure where you’re going with the rest-of-the-fediverse bit, but here’s a quick summary of how to avoid even needing to talk about such features, and to do it, mastodon side, with the features i have already laid out. i’ll also hit a couple of notes on why these features need some sort of official support and implementation, some sort of standard, a bar to set the minimum a client should be doing.
when you are worried about the wider fediverse not properly federating and not following the rules you are trying to set forth for, say, content warnings, sensitive content and the like, i would like to put forward this general observation:
not everyone views the same things as sensitive
this is kind of easy to understand, i think, but it adds to what i am about to try to sell you on.
so, since not everyone has the same standard for what needs to be treated as sensitive (food needs a content warning for some people, for example, but not for everyone), the answer is to implement the tool i laid out at the top of this very thread. people need the ability to decide for themselves what they need to be warned about, what should be treated as sensitive, and the like.
i understand this was deemed too expensive on the server side in the other thread (Keyword muting for anti-harassment and content filtering), so let me address how we should deal with that.
a few philosophies need to hold true here, all of them purely from the standpoint that the user comes first. we are, after all, creating a social media platform, i.e. a place for users to exist.
something that has come up, and which frankly astonishes me in how it has been treated, is the example of different clients not sharing information. this is very simple to fix: server side, we create the UI to store settings, set the standard for how those settings should be handled and the level of granularity they need, so the user does not have to set the same settings multiple times.
let me rephrase, as i am getting into jargon and fancy talk about users and experience.
put simply: it is absurd that every individual client used with mastodon, on THE SAME ACCOUNT, fails to share a setting as simple and central as the default posting privacy level. the same philosophy carries over to muting and to dynamic content warnings, and it is simply against the user in terms of usability.
let’s say i personally have a STRONG aversion to food. i do not, but this is an example, and a semi-frequent one at that.
i am a user, i sign up for mastodon, and i follow some people. early on, i encounter an image of muffins, tagged “food” by the poster, but not behind a content warning. i propose this become the default in many cases: things considered generally acceptable for public viewing stay publicly viewable without the annoying content-warning button, and every user can set a preference to add a content warning or hide that image, so they don’t get sick from the food.
so now i decide i like mastodon, and i go to get a mobile app.
not only am i accidentally posting to the public timeline ALL OVER AGAIN, but i now find myself seeing food i really wasn’t expecting, coz like, i muted it already, right? and that only makes seeing the thing i chose not to see EVEN WORSE.
this is a thing @Gargron has said will not be stored server side, but i would argue it is vitally necessary. please, please listen to me when i say this, and at the very least create the proposed UI and set the example for letting users properly hide content in a way that makes sense.
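One possible shape for such server-stored preferences, purely hypothetical (none of these field names are a real Mastodon API), with a merge step a client could run at login so local tweaks layer on top instead of starting from scratch:

```python
# Hypothetical account-level preferences, fetched once per client.
default_preferences = {
    "default_visibility": "unlisted",  # avoids the accidental-public-post problem
    "muted_keywords": ["food"],
    "auto_cw_tags": ["food"],          # wrap matching posts behind a CW
    "hide_media_tags": ["food"],
}


def merge_preferences(stored, local_overrides):
    """Layer purely local client tweaks over the server-stored defaults,
    so every client starts from the same account-wide behaviour."""
    merged = dict(stored)
    merged.update(local_overrides)
    return merged
```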
back to the topic of this thread and the other: combining the keyword mute-behind-a-content-warning feature with image hiding, amalgamated into one content-warning feature keyed on tags, means the UI is much clearer and the user experience is overall so much better.
now that i have (hopefully) changed someone’s mind, or at least convinced people of why these things are being handled very badly as is, i hope people will listen to me.
i will happily outline a very comprehensive github feature request as @maloki suggested, but i am really not up to defending my case again and again, as i have been. that is the number one reason i tuned out of this conversation for so long; as far as i can tell, the only people who do not want these features are the ones who are eventually convinced that my way is correct! as such, i will happily have these conversations on a person-by-person basis, but i’m really tired of reiterating what has already been said. please read the entire thread before stating that this feature should not exist, or that it loses granularity. please read it all; i know it’s a lot.
Short clarification on one thing:
- ‘Noise filter’, which leaves only the counts-from-followed on a topic
This kind of muting might benefit from having the post auto-CW’d, under a sort of ‘this was filtered for X’ message. I don’t know whether that should be the default behavior, or whether it would negate the counts anyway, but it might be useful for the noise-filtering version of the feature, if that’s something that seems useful and implementable.
implementation, some sort of standard,
This. A reference implementation – ideally in a few languages client devs are likely to be using, but in any language as a bare minimum – goes a long way towards getting it used in clients, in my (admittedly limited) experience.
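As a sense of how small such a reference implementation could be, here is a minimal whole-word, case-insensitive keyword matcher in Python. Whether muting should use word boundaries or plain substrings is itself a design decision; this sketch assumes one answer to it:

```python
import re


def build_matcher(keywords):
    """Compile one case-insensitive, whole-word pattern for all keywords."""
    joined = "|".join(re.escape(k) for k in keywords)
    return re.compile(r"\b(?:%s)\b" % joined, re.IGNORECASE)


def should_hide(matcher, text):
    """True if the text contains any muted keyword as a whole word."""
    return matcher.search(text) is not None
```

With word boundaries, ‘food’ matches “FOOD pics” but not “seafood”, which is the kind of semantics a shared standard would need to pin down so every client behaves identically.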
I mean, there’s always gonna be client devs who decide such-and-such isn’t a thing they care to implement, but anything that reduces the barrier to doing so would be a good step.
I haven’t been on these boards in a while, but I came back wondering whether anyone has pointed out how obviously missing this feature is, particularly the ability to mute/hide toots containing a keyword. I’d even be happy with muting a particular hashtag.
Not all situations have to do with harassment either. For example, the Baron makes a perfectly good point about innocent content too. Whether it’s leeches or cat pics (more likely), a person should be able to filter that stuff out entirely if they don’t want to see it. Has nothing to do with a content warning, exactly.
Looks like this is already possible, using a regex in the Advanced field under the settings for a given timeline.
I’ll go back to my grotto now.