Read the article instead of responding to the title. It was a university conducting formal research: it created AI bots that impersonated different identities, making “As a black man…” style posts in r/ChangeMyView.
The subreddit mods filed a formal complaint with the university when they learned of it, but the university is choosing not to block publication on the grounds that no harm was done.
You were on the mod side of r/BotDefense? I was a very avid reporter to it (so much so that people thought I was a bot), and I was eventually added to the secret Bot Defense subreddit where our reports were automatically flagged as bots. I jumped ship when the API change came, since I saw how vital that API access was to the whole operation.
Do you know of any active analogous systems for Lemmy? Or do you have any ideas as to what we could implement here to abate bad actors?
Yeah, like you I was an avid reporter to r/BotBust. When its owner went off the rails, one of the team members set up BotDefense, and I got recruited to resolve the reports from peeps like yourself and from our little counterpart bot that flagged them at a fair old rate.
I ain’t seen nothing like BD over here on Lemmy. Some of the bot accounts here are at least labelled as such, but I’ve seen numerous ones that aren’t “self labelled”. It’d take a fair amount of effort, but if you’ve got enough people to review reports (especially the ones from humans) I can’t see why not, since it’s basically looking for common markers / traits / flags.
Secret? As opposed to all the blatant AI bot accounts?
Probably ended with bots fighting bots.
Indeed, and it’s exactly what we did in r/BotDefense before greedy piggy spez shut down the API and the communities some had built.
I used to watch in amazement at some of the guys who set up bots, to report the bots to us.
I had mentioned this idea some time ago, but it’s way beyond me to know how to set something like that up. Would you be willing and/or able to help out? What are your suggestions?
How does Lemmy differ? Are we architecturally bot/AI resistant?
Nope. Lemmy is just so small that it doesn’t make sense to target it any harder than they already do. There are lots of bots around already.
Yeah, how do they know they were interacting with real users and not another researcher’s or troll’s bot?