The past week on Bluesky Social has seen a Kiwi Farms psyops campaign that trust and safety professionals might want to learn from. Since last Sunday, a wave of bots displaying the Kiwi Farms name has been creating a feedback loop by interacting with posts by queer and neurodivergent people.

The feedback loop looks something like this: a bot is created with profile details like "Kiwi Farms is watching you" or "You're next on the Kiwi Farms list". The bot's operator has completed the mandatory CAPTCHA but not verified the throwaway email address, in order to reduce the abuse signals available to the platform. The bot can only follow or like posts, not reply, but it still generates notifications for everyone it interacts with. And interact it does -- it searches the global publicly accessible feed for all posts containing terms like "queer", "trans", and "autistic", and likes those posts, leaving a trail of notifications in its wake.

When individuals see the notifications, the vast majority ignore them or block and report (either to Bluesky T&S or to a user moderation list/labeller tool). But a subset of vulnerable individuals believe the interaction is bespoke rather than automated, and have a panic response, posting that they intend to delete everything because they've been found, or that they're afraid to use the platform. Their post inevitably contains the name of the Kiwi Farms website, which the bot then likes, increasing the user's paranoia that they're being watched. And the humans operating the bot can search those posts out, screenshot them as a trophy, and specifically target the most vulnerable users for further harassment and gaslighting on the main forum. This incentivises the proliferation of even more bots.

What platform measures might prevent this? A few might include: listing the number of likes an account has made (to make clear it is a bot), rate-limiting actions by non-verified users, muting notifications for interactions by non-verified users, and hinting to users when posting about these accounts that they should report harassment rather than publicly react.

In the meantime, my non-profit organisation End Networked Harassment has volunteered to maintain a public list of the bot accounts (so they can be automatically muted by subscribers), has reported the accounts to Bluesky T&S, and has published guidance to users regarding the nature of the attack to reduce the amplification loop.
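For illustration only, here is a minimal sketch of what the "mute notifications for interactions by non-verified users" and rate-limiting ideas could look like server-side. The field names (email_verified, likes_last_hour) and thresholds are purely hypothetical and are not Bluesky's actual internals:

```python
from dataclasses import dataclass

@dataclass
class Actor:
    """Minimal view of the account that triggered an interaction (hypothetical fields)."""
    email_verified: bool
    account_age_days: int
    likes_last_hour: int

# Tunable thresholds -- illustrative values only.
MAX_LIKES_PER_HOUR_UNVERIFIED = 30
MIN_ACCOUNT_AGE_DAYS = 1

def should_deliver_notification(actor: Actor) -> bool:
    """Suppress notifications from accounts that look like unverified, high-volume likers."""
    if not actor.email_verified and actor.account_age_days < MIN_ACCOUNT_AGE_DAYS:
        return False  # brand-new, unverified account: deliver nothing
    if not actor.email_verified and actor.likes_last_hour > MAX_LIKES_PER_HOUR_UNVERIFIED:
        return False  # behaving like a mass-liking bot
    return True

def allow_like(actor: Actor) -> bool:
    """Rate-limit the like action itself for non-verified accounts."""
    if actor.email_verified:
        return True
    return actor.likes_last_hour < MAX_LIKES_PER_HOUR_UNVERIFIED
```

The point of gating the notification rather than the like itself is that the target never sees the interaction, which is what breaks the amplification loop.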
One solution that comes to mind is using the browser's fingerprint. When the user passes their CAPTCHA, save their browser fingerprint server-side. When the same user tries to interact with posts (liking, commenting, reposting), check their current browser fingerprint against the one on file. If it doesn't match, prompt them to complete a CAPTCHA again. That would ensure the CAPTCHA is tied to the browser, as opposed to the user, so the bot doesn't inherit it from the admin's initial authentication. That still has a few gaps, but could maybe be a starting point?
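Roughly, the server-side binding could look like the sketch below. It derives a very coarse fingerprint from request headers for brevity; real fingerprinting would use richer client-side signals (canvas, fonts, etc.), and every name here is hypothetical:

```python
import hashlib

# Hypothetical server-side store mapping session token -> fingerprint hash captured at CAPTCHA time.
captcha_fingerprints: dict[str, str] = {}

def fingerprint(headers: dict[str, str]) -> str:
    """Coarse fingerprint from request headers; real systems use richer client-side signals."""
    raw = "|".join(headers.get(h, "") for h in ("User-Agent", "Accept-Language", "Accept-Encoding"))
    return hashlib.sha256(raw.encode()).hexdigest()

def record_captcha_pass(session_token: str, headers: dict[str, str]) -> None:
    """Called when the user solves the CAPTCHA: bind the pass to this browser's fingerprint."""
    captcha_fingerprints[session_token] = fingerprint(headers)

def may_interact(session_token: str, headers: dict[str, str]) -> bool:
    """Gate likes/replies/reposts: if the fingerprint no longer matches, force a fresh CAPTCHA."""
    stored = captcha_fingerprints.get(session_token)
    return stored is not None and stored == fingerprint(headers)
```

A bot driving the API from a different client than the one that solved the CAPTCHA would fail may_interact and get re-challenged, though a headless browser that replays the same headers would still slip through -- hence "a starting point".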
I joined Bluesky three days ago. I followed 9 or 10 peeps I know from conferences (several of whom are KF targets) and made my first post. There was nothing likely to be triggering in my first post except mentioning the Fediverse. Within 60 seconds my post had been liked by 3 Kiwi Farms bots and 2 "SWP" bots. I immediately deleted my account and went back to looking at puppies and red pandas on Instagram. My Bluesky experience lasted less than 5 minutes, but I can attest to the smoothness of sign-up and deletion. I think social networking is a lost cause.
I really wish social network designers would read https://ariadne.space/2021/09/03/how-networks-of-consent-can-fix-social-platforms/ which was written over 3 years ago. Most harassment is easily avoidable by simply not granting rights by default, instead granting limited rights based on context.
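For illustration only (this is not code from the linked article), the default-deny, context-based grants could look something like:

```python
from enum import Enum, auto

class Right(Enum):
    LIKE = auto()
    REPLY = auto()
    MENTION = auto()
    DM = auto()

def granted_rights(author_follows_actor: bool, mutuals: bool, actor_on_consent_list: bool) -> set[Right]:
    """Default-deny: strangers get no interaction rights; context grants more."""
    rights: set[Right] = set()  # nothing is granted by default
    if author_follows_actor:
        rights |= {Right.LIKE, Right.REPLY}
    if mutuals:
        rights |= {Right.MENTION}
    if actor_on_consent_list:   # explicit consent, e.g. "open to DMs from conference friends"
        rights |= {Right.DM}
    return rights

def may(right: Right, *, author_follows_actor: bool, mutuals: bool, actor_on_consent_list: bool) -> bool:
    return right in granted_rights(author_follows_actor, mutuals, actor_on_consent_list)
```

Under a model like this, a brand-new bot account with no relationship to the author can't even generate a like notification, which removes the attack described above entirely.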
After today's "policy" I deleted my account there. It was a moderately ok 4 weeks.
Ugh... I saw a follow right away from a user like this on my group for women and non-binary Rubyists, and blocked and reported them. Is that the best approach for dealing with them as a user? Just block right away?
Bluesky has a list feature where you can add these accounts and people can automatically block everyone on the list. Do you have a Bluesky blocklist set up?
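For anyone wanting to maintain such a list programmatically, here is a rough sketch of adding an account to an existing moderation list via the raw XRPC endpoints. It assumes the app.bsky.graph.listitem record shape; the official atproto SDKs wrap this more conveniently, so treat this as illustrative rather than the canonical way:

```python
import datetime
import requests

PDS = "https://bsky.social"

def login(handle: str, app_password: str) -> dict:
    """Create a session; the response includes accessJwt and the account's DID."""
    r = requests.post(f"{PDS}/xrpc/com.atproto.server.createSession",
                      json={"identifier": handle, "password": app_password})
    r.raise_for_status()
    return r.json()

def add_to_mod_list(session: dict, list_uri: str, subject_did: str) -> dict:
    """Add subject_did to an existing moderation list (identified by its at:// URI)."""
    record = {
        "$type": "app.bsky.graph.listitem",
        "subject": subject_did,
        "list": list_uri,
        "createdAt": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    r = requests.post(
        f"{PDS}/xrpc/com.atproto.repo.createRecord",
        headers={"Authorization": f"Bearer {session['accessJwt']}"},
        json={"repo": session["did"], "collection": "app.bsky.graph.listitem", "record": record},
    )
    r.raise_for_status()
    return r.json()
```

Subscribers who set the list to "mute" or "block" in the app then get protection from every account on it automatically.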
spoutible.com is designed to be a safer space for everyone (alternative to Bluesky). https://help.spoutible.com/support/solutions/articles/150000137026-feature-comparison I'm not saying that it would not have this issue, but safety is a primary feature, not an afterthought.
One interesting quirk: the botnet operator appears to be reading our public mute/block list, and as soon as one of its accounts is detected on the list, the account goes silent and renames/reskins itself to take on the appearance of either a previous interaction target or a random profile from the network. The bot activity then resumes from a different, brand-new account. We can think of several reasons why they might do this -- perhaps they believe it will make future abuse reports less likely to be actioned now that the account looks legitimate (and is harder for targets to find and report post-rename), or it may be an attempt to look like a false positive and make list operators appear careless or malicious.

Let us be clear that this is not a knock on the Bluesky Social team, who have been inundated with new users in the past few weeks and have an employee-to-user ratio of roughly 1 to 1 million right now. We've appreciated the help of Aaron Rodericks and his team in addressing this abuse pattern, and we also understand this is one of several dozen fires they're dealing with right now. https://bsky.app/profile/lizthegrey.com/post/3lbmxyu4iis2z
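Worth noting that Bluesky mute/block lists reference accounts by their stable DID rather than by handle, so the rename alone doesn't remove an account from the list. If a list maintainer wanted to notice the reskins, a rough polling sketch against the public app.bsky.actor.getProfile endpoint might look like the following (the state handling and polling cadence are illustrative only):

```python
import requests

# Public AppView host; getProfile can be queried without authentication.
APPVIEW = "https://public.api.bsky.app"

def get_profile(did: str) -> dict:
    """Fetch the current handle and display name for a DID via the public AppView."""
    r = requests.get(f"{APPVIEW}/xrpc/app.bsky.actor.getProfile", params={"actor": did})
    r.raise_for_status()
    return r.json()

def detect_reskins(listed_dids: list[str], last_seen: dict[str, tuple[str, str]]) -> list[str]:
    """Return DIDs whose handle or display name changed since the last poll."""
    changed = []
    for did in listed_dids:
        profile = get_profile(did)
        current = (profile.get("handle", ""), profile.get("displayName", ""))
        if did in last_seen and last_seen[did] != current:
            changed.append(did)
        last_seen[did] = current
    return changed
```

Flagging reskins this way would also help counter the "false positive" angle, since the list maintainer can show the original profile each listed DID was using when it was added.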