The past few days have seen a huge uptick in seemingly automated registrations of spammer accounts on my instance, vis.social. I fear that our (Mastodon’s) increasing popularity may make us more of a target for such accounts, and thus increase admins’ time spent on moderation.
What are your current strategies for spam mitigation? And how would you feel about these ideas?
- Option to require human/admin approval before new accounts can post (e.g., so I can evaluate how “spammy” the email address looks).
- Design tweaks to the current moderation tools to enable easier bulk actions (e.g., suspending multiple accounts at once, which currently requires many clicks).
- Option to require certain human-friendly actions: anyone can register, but before you can post, you have to provide a little more evidence that you are human (a name, photo, or bio, though none of that info has to be real, of course). This would give admins an additional signal about whether an account is "real" or a spammer. Currently, for an empty profile, all we have to work with is the username, email, and IP address. Of those, email is the best signal, since an address with random letters or a shady domain probably belongs to a spammer.
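
That "random letters or shady domain" judgment could even be partially automated. Here is a minimal sketch of the idea; the blocklist, thresholds, and helper names are all illustrative assumptions, not anything Mastodon actually implements:

```python
import math
import re

# Illustrative blocklist of throwaway-mail domains (assumed, not exhaustive).
SHADY_DOMAINS = {"mailinator.com", "example-throwaway.net"}

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character; random strings score higher."""
    if not s:
        return 0.0
    n = len(s)
    counts = {c: s.count(c) for c in set(s)}
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def looks_spammy(email: str) -> bool:
    """Flag an address whose domain is blocklisted, or whose local part
    looks like random letters (long, high entropy, almost no vowels).
    Thresholds are guesses for illustration only."""
    local, _, domain = email.partition("@")
    if domain.lower() in SHADY_DOMAINS:
        return True
    letters = re.sub(r"[^a-z]", "", local.lower())
    vowel_ratio = sum(c in "aeiou" for c in letters) / max(len(letters), 1)
    return (len(letters) >= 8
            and shannon_entropy(letters) > 3.0
            and vowel_ratio < 0.2)

# A random-looking local part vs. a plausible human one:
print(looks_spammy("xkqzvtwpbr@gmail.com"))    # → True
print(looks_spammy("maria.lopez@gmail.com"))   # → False
```

Of course a heuristic like this would only rank accounts for human review, not auto-suspend them; false positives on real names are inevitable.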