Spam strategies?


#1

The past few days have seen a huge uptick in seemingly automated registrations of spammer accounts on my instance, vis.social. I fear that our (Mastodon’s) increasing popularity may make us more of a target for such accounts, and thus increase admins’ time spent on moderation.

What are your current strategies for spam mitigation? And how would you feel about these ideas?

  • Option to require human/admin approval before new accounts can post (e.g., so I can evaluate how “spammy” the email address looks).
  • Design tweaks to current moderation tools, to enable easier bulk actions (e.g., suspend multiple accounts at once, something which now requires many clicks).
  • Option to require certain human-friendly actions, such as: anyone can register, but before you can post, you have to provide a little more evidence that you are human (like a name, photo, bio — not that any of that info has to be real, of course). This would provide additional signal to admins about whether an account is “real” or a spammer. Currently, for an empty profile, all we have to work with is username, email, and IP address. At the moment, email is the best signal, as I can guess something with random letters or a shady domain is probably a spammer.
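As a rough illustration of that "random letters" signal, a toy heuristic might look like the following. This is not anything Mastodon ships; the regex and the run-length threshold are arbitrary assumptions, purely to show the idea of machine-checking what an admin currently eyeballs.

```python
import re

def looks_random(local_part: str) -> bool:
    """Crude spam signal: flag email local-parts containing a long run
    of consonants, which often indicates a keyboard-mashed username.
    The run length of 5 is an arbitrary threshold."""
    return re.search(r"[bcdfghjklmnpqrstvwxz]{5,}", local_part.lower()) is not None
```

In practice this would only be one weak signal among several (email domain, IP reputation), and it will misfire on real names with heavy consonant clusters.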

How do invitations work?
#2

Many MastoAdmins have had this problem recently. @pfigel had a good thread with a number of contributions: Patrick Figel 🐣: "Anyone else seeing spammy-looking signups from QU…" - Mastodon

I found @Curator@mastodon.art's post with the nginx bad-bot-blocker to be the most effective. I implemented V2 and the Fail2Ban jail and have not had an issue since: GitHub - mariusv/nginx-badbot-blocker: Block bad, possibly even malicious web crawlers (automated bots) using Nginx
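For anyone setting this up fresh, a Fail2Ban jail along these lines can complement the blocklist. This is a generic sketch, not the repo's exact configuration; Fail2Ban ships an `apache-badbots` filter that also works on nginx access logs, and the paths and ban times below are assumptions to adjust for your distribution:

```ini
# /etc/fail2ban/jail.local — generic sketch; log path and ban time
# are assumptions, adjust to your setup.
[nginx-badbots]
enabled  = true
port     = http,https
filter   = apache-badbots
logpath  = /var/log/nginx/access.log
maxretry = 2
bantime  = 86400
```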

@Gargron is right, and we should take the time to add this to the admin documentation. An addition to the Security documentation section would be best: documentation/Security-Guide.md at master · tootsuite/documentation · GitHub
It’s definitely part of administering a server exposed to the internet, but not everyone knows what tools are available to them or how to configure them.

Also, I have not read all of how this bad-bot blocker picks what to block, aside from a manually curated list. Adding known bot blacklists from other sources to this list would be a good idea, as would possibly some sort of ReCaptcha or other human test. Having admins manually approve all accounts seems burdensome. Further, it may only be a matter of time before spam instances start popping up and trying to connect to other instances in the network, at which point we may need defenses built into Mastodon or ActivityPub to protect ourselves.

EDIT: The bot blocking may not be as effective as I thought; I've got another 20 bot registrations today. Some sort of captcha challenge is needed within Mastodon, and in the moderation panel it would be nice to have a visual indicator when an account is suspended locally.


#3

Thank you, @Crazypedia, for these links! Indeed, some server-side bot blocking would be super useful, even if not 100% effective. I’m not hosting myself, however, so I’ve shared this with my host @hugogameiro in hopes he may be able to implement aspects of this.

That said, for my instance (and given I don’t have server access nor the technical skills needed for that level of administration), I would definitely make use of human-level UI to help facilitate approving new accounts. I really like social.coop’s customized “request an account” flow. You can’t register yourself, but anyone can “request” to be registered. It’s a great in-between path that works for them.

My instance is also fairly small-scale (currently 1,200 accounts, but far fewer actually active), so it would be reasonable for me to manually review and approve accounts, or delegate to moderators.

I’ve just searched the GitHub issues and don’t see any proposals for revised admin tools or UI to help address spam bots.


#4

I had hoped that Domain blocks or Email domain blocks under Moderation would prevent the creation of accounts with emails from any of the blocked domains, but that doesn't seem to be the case. What exactly are these options blocking?

I also had hoped that wildcards could be applied to block domains at large. I really, really doubt that a genuine user with a domain matching *.gdm or *.host or *.pw … will ever be interested in an account on our instance.
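If wildcard support were added, the matching itself would be straightforward. Here is a sketch using Python's stdlib glob matching; the pattern list and function are illustrative, not Mastodon's actual code:

```python
from fnmatch import fnmatch

# Illustrative blocklist, mirroring the TLDs mentioned above.
BLOCKED_PATTERNS = ["*.gdm", "*.host", "*.pw"]

def email_domain_blocked(email: str) -> bool:
    """Return True if the email's domain matches any wildcard pattern."""
    domain = email.rsplit("@", 1)[-1].lower()
    return any(fnmatch(domain, pattern) for pattern in BLOCKED_PATTERNS)
```

The hard part is presumably not the matching but deciding where in the signup flow to enforce it and how to surface the rules in the moderation UI.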


#5

Also, is it just me, or does this spam attack look all in all quite decent, almost a benign warning? I am getting about 10–20 spam accounts created every day, of which about 2–3 post spam content. This can be handled manually while a better solution arrives, even if it is a boring pain. However, what would it take for the attackers to upgrade to hundreds of accounts sending thousands of messages? Not much?